Thoughts on “Growing Well” as a technology startup

Talk given by leading software professional Dorothy M Danforth of Danforth Media to the Philadelphia New Technology Community on 12/19/2014.

Every entrepreneur wants to see his or her vision become a huge overnight success. But there can be a heavy price for moving too far, too fast. How big and how quickly you can grow while still producing a successful company depends on many things. In her talk, Dorothy shares stories from her experience working with high-growth Silicon Valley startups, and later as a product strategy consultant. She explores how to “grow well,” offering ideas to help you define a pace and scale that will support a healthy, sustainable outcome.

Read the Huffington Post Business Blog review.

Moving from Vision to Design: User-Centered Methods for New Product Definition

Seminar by Dorothy M. Danforth for the IEEE Computer Society Leading Professional Seminar Series. – 30 minutes

It’s a common scenario: A company is planning a new product or a significant redesign. There have been various discussions about how the product should have a “great user experience” and “focus on the user.” But there are also conflicting ideas about what a great experience might entail, along with competing priorities for what the product absolutely must do to be successful in the marketplace.

Where to begin? How do you break through the confusion and move toward a clarified product vision? Whether in a large established corporation or a lean start-up, organizations struggle to progress from early ideation into clear requirements and a tangible design phase. This webinar will explore ways to leverage user experience design methods in the very early stages of the product life cycle.

This session covers the following:

  • An overview of practical user research and design planning methods useful for early stage products and redesigns
  • Strategies for leveraging these methods to refine a product’s vision and ensure features are tied to user goals
  • Examples of how keeping a focused eye on user needs can help resolve conflicting priorities and promote product team alignment

Scoping and Pricing UX Projects for Consultants

“We’d like to work with you, please send me your rates…”

It’s a simple request, but one that can easily stump a newly minted UX consultant. When starting out, many independent consultants charge based on the going rate offered by the hiring firm. However, there is often room to negotiate a better deal. In other situations, the client is looking to the consultant to set pricing. So how, exactly, do you determine what to charge?

Read more on Giant UX…

Sample Project Estimate

8 Lessons for Entrepreneurs From the Philly New Technology Holiday Meetup

Event review by David Ongchoco for Huffington Post Business Blog.


“Last Wednesday, I got to attend the Philly New Technology Meetup Holiday Extravaganza attended by more than 150 entrepreneurs, innovators and industry leaders. It was an overall exciting time to get to meet like-minded individuals, connect and learn. Here are a few key takeaways and lessons learned from the speakers at the start of the event.”

Read full review:
http://www.huffingtonpost.com/david-ongchoco/8-lessons-for-entrepreneu_b_6357496.html?utm_hp_ref=tw

UX Evolution Mindset & Methods

For the Delaware Valley Human Factors & Ergonomics Society – Nov. 2014

User Experience Design (UX) is a hot term in software these days, but as a relatively new and evolving field there has been confusion as to what this discipline entails and how it relates to other design practices. In this talk, Dorothy will provide an overview of current user experience design and research best practices, touch on how these methods have evolved in recent years, and discuss what many practitioners believe to be core philosophies behind “User Experience Design” as an approach to software design. In addition, Dorothy will walk through a software product lifecycle using case study examples to illustrate how common UX methods can be leveraged to improve a product. The presentation will be followed by an open discussion about where User Experience Design methods parallel or counter other human factors and ergonomics practices.

Takeaways – Participants will walk away with a clear understanding of User Experience Design as a practice, an overview of current methods, and insight into how these practices might relate to broader human factors and ergonomics approaches.

Blog Post

Conducting a User Experience Audit

“If you would understand anything, observe its
beginning and its development.”
– Aristotle

A User Experience (UX) Audit is a secondary research method that pulls together any potentially relevant existing information on your software product and its market, and then reviews what you find in the context of your design goals. It’s a straightforward approach I’ve used in just about every strategy project I’ve completed for clients.

This article is intended to illustrate the type of data commonly available that can be helpful. It is by no means an exhaustive list, but should be enough to point you in the right direction. I recommend starting any strategic effort with this approach because it is vital to have a baseline understanding of the market landscape before learning more about the users within that market. In addition, much of the information and insights gleaned from this type of evaluation can be used to directly inform other user research methods, such as persona or survey development. I usually start the audit process as early as possible via Internet research and by requesting client artifacts even when an engagement does not specifically call for a formal “audit”.

Most organizations have existing structures by which they pull in various usage and marketing metrics. However, this data is not usually evaluated with a user-centric, user-behavior mindset. A UX Audit entails skimming through a large volume of data to unearth a relatively small set of relevant informational nuggets. Even so, the audit is a worthwhile effort, and its scope can be defined in a manageable way.

Err on the side of collecting more information rather than less. If a client or stakeholder tells you the content you are requesting is not relevant, it is a good idea to be persistent and review the information for yourself. They might not be looking at it with a “UX” mindset.

A UX Audit can be used to answer the following questions:

  • What are the current user trends and expectations for this industry/market?
  • What have we already tried? Of that, what worked and what didn’t?
  • What do our internal stakeholders think about our UX? What do they think is needed? Why?
  • What customer issues, needs or problems are indicated in the data? Of those, which might be addressed (in whole or part) by the product’s UX?
  • What ongoing metrics are being collected that UX can use in the future?

Why Conduct an Audit?

The most obvious benefit of conducting an audit is avoiding reinventing the wheel, i.e. conducting new primary research when the same information is already available. An equally productive reason is to help you formulate hypotheses about user behavior and/or issues with your product that you can then investigate further. A supplemental benefit is that the process helps you compile an accessible body of UX knowledge for your products that you can build upon over time.

When are Audits Most Useful?

  • When undertaking the development of a new product.
  • Before starting a substantive re-design.
  • When starting a UX practice within an organization.
  • As an exercise for new staff to ramp into product knowledge.
  • If your company has accumulated a large amount of product research data that was conducted by different departments for different uses.

Development Life-cycle

In the context of the software development life-cycle, UX Audits are most useful when conducted during the high-level and detailed requirements gathering stages. Some audit materials can be re-evaluated post-production as follow-up research to track the effects of the product’s release, e.g. customer care data, web analytics, and sales data.

Limitations of an Audit

  • There is no guarantee that you will find data that addresses any specific questions. Sometimes the data isn’t there or it is too abstract to be useful in the context of UXD.
  • It can be time consuming and somewhat overwhelming for the beginner to process the information, especially if audits are rarely conducted.
  • Because the audit materials are almost entirely secondary research, you are limited to the methodologies, goals, and potential flaws of the existing research.
  • A good audit involves a wide range of information sources. New companies and some industries might have difficulty pulling together sufficiently diverse sources during the first few audits. In some cases research might need to be purchased.

How to Conduct the Audit

The steps to conducting a UX Audit are straightforward. At a high level, you gather the audit materials together, create a spreadsheet for notes, review the materials, document findings, and then develop your insights or hypotheses for further research.

  1. Pull together your audit materials. The start of a UX audit is an excellent time to engage colleagues from other departments; you can solicit their help in gathering information and get different groups involved with tracking data over time.
    • Stakeholder Interviews – Interviews are a great starting point for a UX Audit and can go a long way toward helping you gather the materials you need. You will want to speak (one on one) with internal stakeholders such as department heads, product managers, and lead developers. You might already speak with these individuals, but interviewing them specifically about the market landscape and customer issues may not only provide some good insights, it can also go a long way toward gaining buy-in and support for your efforts. Be sure to ask each stakeholder for a list of their recommended materials and follow up to get them.
    • Sales Statistics – While primarily used by sales and finance, some of this data can be useful for a UX Audit, particularly if you are reviewing the effectiveness of a lead-generating or e-commerce web site. One thing to look for is information that indicates a problem with the messaging or help functions of the site. For example, if a site selling window curtains has a higher ratio of online customers who return curtains due to “wrong size” than its in-store customers, this might indicate a potential issue with the clarity of size information on the site.
    • Call Center Data – Call centers are a great way to gather information about what ticks people off. While much of the information may not be relevant, you can usually gain some key insights about what is missing, or even better, get ideas on what you can proactively improve. As an example, the online signup process for a broadband company I worked with had functionality that would tell users whether they were eligible for services. A UX Audit of call center data showed that a percentage of customers who were initially told they were eligible were actually ineligible after a closer review of their order details. While we were unable to resolve this programmatically in the short term, armed with this understanding we were able to modify functionality and messaging to more appropriately set expectations for users.
    • Web Analytics – Quantitative web analytics will give you insight into how many people are visiting your web site, where they are coming from, what they are looking at, and some trends over time. Advanced analytical tools can be implemented and mined to give ever-increasing detail about what people are doing once they get to your site, where they tend to drop off, and where they go once they leave. I’ve had at least one corporate client who was not mining their web logs. Luckily, the data was being collected, just not used within an analytics package. We were able to get them set up with an appropriate package that allowed the team to view historical and ongoing site usage.
    • Adoption Metrics – Feature adoption/usage metrics are a good way to assess the efficacy of a desktop and/or web-based operational support system. These metrics can be system-tracked, but in some cases need to be manually investigated. While this is fairly easy for a SaaS or mobile provider, a desktop application provider might only be able to get customer adoption metrics through surveys or interviews if these monitoring touch-points have not been built into the system.
    • Feedback and Survey Results – Many marketing groups put out feedback forms and/or have released campaign-specific user surveys. These are usually not UX focused, but they can offer some insights into your users’ preferences, attitudes, and behaviors. Take some time to scan the comment fields and categorize them if possible. You can turn this information into useful statistics with supplemental anecdotes, e.g. “20% of user comments referred to difficulty finding something,” paired with “I can’t find baby buggies, do you still sell them?” (A minimal code sketch of this kind of tallying appears after this list.)
    • Past Reviews & Studies – Any internal market research, usability studies, ethnographic studies, or expert reviews[1] should be audited. Even if a study was conducted for a previous release or in a different context than your project, it may contain some key informational gems and is worth scanning through. In addition to applying your own critical eye, it is wise to find out whether others in the organization valued the research and why.
    • The Twitterverse and Blogosphere – While not relevant for all companies, review sites, blogs, Facebook, Twitter, and other social networking sites can offer a unique and unfiltered view of how customers perceive your software or website. Try searching your company or product’s name on Google and other sites to see what information is returned. Some of the social networking sites even offer functionality that helps you keep track. If people are talking, you may want to add this type of task to your research calendar at consistent intervals.
    • Specifications – Take a look at product functional specifications, roadmaps, and business analyses. Anything generated relatively recently that gives background insight into why certain features or functions were developed might prove useful. Many of these documents contain relevant facts about users that were researched by the authors. At best this will save you some of your own research time; at worst you’ll have a better understanding of why certain decisions were made for what exists today.
    • Market Research – While market research might give insight into user demographics, this type of research is usually not directly translatable into how you should design your product. However, it can help you develop hypotheses about what might work and provide a framework for user personas and user narratives. These hypotheses can be tested through other research methods. Market research can help you make a reasonable guess at things such as users’ technology skill level, initial expectations, or level of commitment to completing certain tasks.
  2. Create a Spreadsheet. Create a spreadsheet listing all of the materials you will be auditing. You can use this as a means of tracking what was reviewed, and by whom if more than one person is working on the audit. The spreadsheet can also be used as a central place to put your notes, facts, insights, ideas and questions generated by the review of each audit material.
  3. Review the Materials. Review the materials for any relevant information, updating your spreadsheet as you progress. While it sounds daunting, the audit process can be a fairly cursory review; you don’t need to examine every bit of detail, and scanning is often sufficient. Remember, you are only trying to pull out the 10-15% of the data that is relevant to the goals of your project.
  4. Categorize Findings. After you’ve completed the review portion of the UX Audit, it’s time to clean up your spreadsheet notes, analyze the information, and categorize any findings. Try to distill what you’ve learned into high-level concepts that are supported by data points and anecdotes, followed by your hypotheses. An oversimplified example of a categorized finding:
    Category – Way Finding (users’ ability to find things)
    Data
    – 20% of feedback comments referred to users not being able to find what they are looking for.
    – A recent study indicates that if users can’t find an item within 3 minutes they leave the site.
    Anecdote
    – “I can’t find baby buggies, do you still sell them?”
    Hypotheses
    – We might have an issue with our site’s navigation or taxonomy. We might need a search function.

  5. Schedule a Read-out. Take time to present your findings: set up a read-out for your colleagues. After conducting the read-out, publish your documentation on the intranet, to a wiki, in a document management system, or on a file share. Let people know where you’ve placed the information. Now is a good time to tentatively schedule the next audit on your research calendar.
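
To make the categorization and tallying steps concrete, here is a minimal sketch in Python. The category keywords and the input file name feedback_comments.csv are hypothetical placeholders, not a prescribed taxonomy; the point is simply how a pile of scanned comments can be turned into the “20% of comments” style data points used in the example finding above.

import csv
from collections import Counter

# Hypothetical keyword map: each finding category is matched against a few
# indicative phrases. Calibrate these to your own product and data.
CATEGORIES = {
    "Way Finding": ["can't find", "cannot find", "where is"],
    "Messaging": ["confusing", "unclear", "didn't understand"],
    "Performance": ["slow", "timeout", "loading"],
}

def categorize(comment):
    """Return every category whose keywords appear in the comment."""
    text = comment.lower()
    return [cat for cat, keys in CATEGORIES.items()
            if any(key in text for key in keys)]

def audit_comments(path):
    """Read a one-column CSV of feedback comments; print each category's
    share of all comments plus one supporting anecdote."""
    counts, anecdotes, total = Counter(), {}, 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            if not row or not row[0].strip():
                continue
            total += 1
            for cat in categorize(row[0]):
                counts[cat] += 1
                anecdotes.setdefault(cat, row[0])  # keep the first example seen
    for cat, n in counts.most_common():
        print(f"{cat}: {n / total:.0%} of {total} comments")
        print(f'  anecdote: "{anecdotes[cat]}"')

if __name__ == "__main__":
    audit_comments("feedback_comments.csv")  # hypothetical input file

A quick pass like this will never replace actually reading the comments, but it makes the review step less tedious and gives your findings quantifiable support.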

Additional Resources

  1. Pew Internet Life (www.pewinternet.org) – Internet research, ongoing
  2. Omniture (www.omniture.com) – robust analytics package
  3. Web Trends (www.webtrends.com) – mid-range analytics package
  4. Google Analytics (analytics.google.com) – analytics with useful functions
  5. Forrester (www.forrester.com) – Market research
  6. ComScore (www.comscore.com) – Market research

Questions About This Topic?

I’m happy to answer more in-depth questions about this topic or provide further insight into how this approach might work for you in your company. Post a comment or email me at dorothy [at] danforthmedia [dot] com

ABOUT DANFORTH MEDIA
Danforth is a design strategy firm offering software product planning, user research, and user centered design (UCD). We provide credible insights and creative solutions that allow our clients to deliver successful, customer-focused products. Danforth specializes in leveraging user experience design (UXD), design strategy, and design research methodologies to optimize complex multi-platform products for the people who use them.

We transform research into smart, enjoyable, and enduring design.
www.danforthmedia.com

 


[1] Common term used to describe a usability evaluation conducted by a UX specialist.

Article’s “Audit All The Things” image source.

Blog Post

Tips for Establishing User Research in an Organization

Q. How was God able to create the world in seven days?
A. No legacy system.

Unlike “God” in this corny systems-integrator joke, our realities constantly require us to deal with legacy: systems, processes, and attitudes. This article discusses some critical success factors for getting credible, valuable research data. It also provides some ideas on project management and obtaining stakeholder buy-in. Many of these tips come directly from real project experiences, so examples are provided where applicable.

Be Prepared for Setbacks

A good friend and mentor of mine once told me—after a particularly disappointing professional setback—that it’s quite possible to do everything the “right” way and for things to still not work out. What he was saying, in short, is that not everything is in our control. This understanding liberated me to start looking at all my efforts as a single attempt in an iterative process. As a result, when starting something new to me, I tend to try out a range of things to gauge a baseline for what works, what doesn’t, and what might have future potential. Anyone who has adopted this approach knows that as time passes, a degree of mastery is achieved and your failure percentage decreases. This doesn’t occur because you are better at “going by the book”. It happens because you get better at identifying and accounting for things outside your control.

That said, whether you are trying to introduce a UXD practice into your organization or are just looking to implement some user research, the most realistic advice I can offer is the Japanese adage “nanakorobi yaoki,” or “fall down seven, get up eight.”

Account for Organizational Constraints

You don’t need a cannon to shoot down a canary. Be realistic; consider the appropriateness of your research methods in the context of your organization’s stage and maturity. It will not matter how well designed or “best practice” your research is if the results cannot be adequately utilized. Knowing where you are in the lifecycle of a company, product, and brand will help set expectations about results. In this way, a product’s user experience will always be a balance between the needs of the user and the capacity of the organization to meet those needs. If you are developing in a small startup with limited resources, your initial research plans may be highly tactical and validation focused. You’ll probably want to include plans that leverage family and friends for testing, and rely heavily on existing third-party or purchased research. Alternately, a larger organization with a mature product will need to incorporate more strategic, primary research, and will have use for more sophisticated methods of presenting and communicating research to a wide range of stakeholders.

Foster a Participatory Culture

Want buy-in? Never “silo” your user research.

Sometimes, particularly in large corporations, there can be a tendency for different departments to silo or isolate their knowledge. This can be for competitive “Fiefdom Syndrome”[1] reasons or, more often than not, simply a lack of process to effectively distribute information. Whatever the challenges, there are some very practical, self-serving reasons to actively communicate your UXD processes and research. First, because anyone in your organization who contributes to the software’s design is likely to have an impact on the user experience, it’s your job to ensure that those people “see what you see” and are empowered to use the data you find. Second, UXD is an art, and like anything else with a degree of subjectivity, you’ll need credibility and support if you want your insights and interpretations accepted. Lastly, UXD is a complex process with many components; you will get more done faster if you encourage active company-wide participation.

Tips on fostering participation:

  • Identify parts of your UX research that could be performed by other functional areas, e.g. surveys run by Customer Care, usability guidelines for Quality Assurance, additional focus group questions for Marketing.
  • Offer other areas substantive input into user testing, surveys, and other research, e.g. add graphic design mockups into a wireframe testing cycle and test them with users separately from the wireframes.
  • Discuss process integration ideas with the engineering, quality assurance, product, editorial, marketing and other functions. Make sure everyone understands what the touch-points are.
  • In addition to informing your own design efforts, present user research as a service to the broader organization; schedule time for readouts, publish your findings, and invite people to observe testing sessions.

Understand the Goals of your Research

Are you looking to explore user behaviors and investigate future-thinking concepts? Or, are you trying to limit exploration and validate a specific set of functionality? There is a time and place for both approaches, but before you set out on any research effort it is important that you determine the overarching goal of your research. There are some distinct differences in how you implement what I’ll call discovery research vs. validation research—each of which will produce different results.

  • Discovery Research, which can be compared to theoretical research, focuses on the exploration of ideas and on investigating users’ preferences and reactions to various concepts. Discovery research is helpful for new products, innovations, and some troubleshooting efforts. This type of research can complement market research, but unlike focus groups, UXD discovery research explores things such as unique interaction models, or user behaviors when interacting with functionality specific to search or social media.
  • Validation Research, which can be compared to empirical research, focuses more on gauging users’ acceptance of a product already developed, or of a high- or low-fidelity prototype intended to guide development. While a necessary aspect of the UXD process, validation research tends to be more task-based and is less likely than discovery research to call attention to false assumptions or superseding flaws in a system’s design. An example of a false assumption that might not be revealed in validation research is the belief that an enhanced search tool is necessary. The tool itself may have tested very well, but the task-specific research method failed to reveal that the predominant user behavior is to access your site’s content through a Google search. Therefore, you might have been better off enhancing your SEO before investing in a more advanced search.

Crafting Your Research Strategy

Just as in any project effort, it is vital to first define and document your goals, objectives, and approach. Not only does this process help you make key decisions about how you want to move forward, it will serve as your guidepost throughout the project and help you communicate activities to others. After the research is conducted, it lends credibility to your results by explaining your approach. A well-crafted research strategy provides the breadth and depth needed for a more complete understanding of what you observe. Consider small incremental research cycles using various tactics. An iterative, multi-faceted methodology allows for more cost-efficient project life-cycles. It also mitigates risk, since you only invest in what works.


Figure 2: An example of a “Discovery” research strategy developed for a media company. The strategic plan consisted of audits, iterative prototypes, user testing, and various events.

Consider a Research Calendar

A research calendar can help you manage communication as well as adapt to internal and external changes. It can help track research progress over time, foster collaboration, reduce redundancy, and integrate both team and cross-departmental efforts. A good research calendar should be published, maintained, and utilized by multiple departments. It should include recurring intervals for items such as competitive reviews and audits, calibrated to the needs of your product. Your calendar doesn’t have to be fancy or complicated; you can use an existing intranet, a company-wide Outlook calendar, or even a public event manager such as Google or Yahoo! Calendar. Regardless of the tool, your research calendar can help prevent people from thinking of user research as a one-off effort. User research should be considered a living, evolving, integral part of your development process; a maintained research calendar with a dedicated owner appropriately conveys this. If your company is small or you are just getting started with user research, consider collaborating with other departments to add focus groups, UAT events, and QA testing to the calendar as well. Not only will this foster better communication, it may also result in the cross-pollination of ideas.
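
If it helps to start lightweight, the calendar itself can begin life as a short script. The sketch below (Python, with assumed activities and cadences) generates a year of recurring research dates that can be imported into whatever shared calendar your company already uses; the activities and intervals are illustrative assumptions, not a recommended schedule.

from datetime import date, timedelta

# Hypothetical recurring research activities and their cadence in days.
# Calibrate these intervals to your own product, team, and release cycle.
ACTIVITIES = {
    "Competitive review": 90,    # quarterly
    "UX audit refresh": 180,     # twice a year
    "Usability test cycle": 60,  # every two months
}

def build_calendar(start, horizon_days=365):
    """Yield (date, activity) pairs out to the planning horizon."""
    for activity, interval in ACTIVITIES.items():
        current = start + timedelta(days=interval)
        while (current - start).days <= horizon_days:
            yield current, activity
            current += timedelta(days=interval)

for when, what in sorted(build_calendar(date.today())):
    print(f"{when:%Y-%m-%d}  {what}")

Printing dates is obviously not the end state; the value is in producing a single, owned artifact that other departments can see, extend, and hold you to.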

Be Willing to Scrap Your “Best” Ideas.

It’s easy to become enamored with an idea or concept, something that interests us or “feels” right, even when the research is pointing in a different direction. Sometimes this comes from a genuine intuitive belief in the idea; other times it’s simply the result of having invested so much time and/or money into a concept that you’re dealing with a type of loss aversion bias[2]. Even after years of doing this type of research, I have to admit this is still a tough one for me, requiring vigilance. I have seen colleagues, whom I otherwise hold in high regard, hold on to ideas regardless of how clear it is that it’s time to move on. This tendency, to find what we want to find and to structure research to confirm our assumptions, is always a possibility in user research. And while it can be mitigated by process and methodology, it still takes a degree of discipline to step back, play devil’s advocate to your best, most fascinating ideas, and look at them in the harsh light of the data being presented to you. Ask yourself: am I observing or advocating? Am I only looking at data that supports my assumptions, casting a blind eye to anything that contradicts it? It can be a painful process, but if the idea holds up to objective scrutiny, you might actually be on to something good.

Questions About This Topic?

I’m happy to answer more in-depth questions about this topic or provide further insight into how this approach might work for you in your company. Post a comment or email me at dorothy [at] danforthmedia [dot] com

ABOUT DANFORTH MEDIA
Danforth is a design strategy firm offering software product planning, user research, and user centered design (UCD). We provide credible insights and creative solutions that allow our clients to deliver successful, customer-focused products. Danforth specializes in leveraging user experience design (UXD), design strategy, and design research methodologies to optimize complex multi-platform products for the people who use them.

We transform research into smart, enjoyable, and enduring design.
www.danforthmedia.com

 


[1] Herbold, Robert. The Fiefdom Syndrome: The Turf Battles That Undermine Careers and Companies – And How to Overcome Them. Garden City, NY: Doubleday Business, 2004.
[2] “Loss Aversion Bias is the human tendency to prefer avoiding losses above acquiring gains. Loss aversion was first convincingly demonstrated by Amos Tversky and Daniel Kahneman.” – http://www.12manage.com/description_loss_aversion_bias.html