Exploring Generative AI Leads Me to One Conclusion: Human-Centered Design Matters More Than Ever

AI discussions span everything from utopian promises to technical deep dives. As someone whose career has centered on observing how humans interact with technology, I’ve witnessed how our systems shape—and are shaped by—our lives. Recently, as a strategist and researcher, I’ve been more deeply exploring generative AI’s potential.

One realization stands out: We urgently need human-centered design (HCD) and established user-centered methods to steer AI’s broader societal impact. By adapting UX principles and embedding them into the foundations of these models, we can avoid further entrenching existing divides, biases, and superficial interactions. This article isn’t a technical exploration of generative AI; it’s an open invitation to rethink how we might design it with greater intention.

It’s tempting to believe that AI could be humanity’s next great leap forward—a technology that will profoundly and ethically enhance our lives. In a perfect world, AI would evolve through global cooperation, with governments and organizations collaborating to set agreed-upon boundaries in our long-term interdependence with this technology.

Let’s face it, that’s not going to happen.

AI is already deeply enmeshed in profit-driven models, and the commercial forces pushing AI’s rapid development are unlikely to yield a blue-sky vision of AI as a mission-driven, human-first global endeavor. We live in a world where technology will be monetized as quickly as possible, even if it’s still half-baked and its long-term consequences have yet to be considered.

So, what do we do? Our challenge is to guide generative AI’s development in a way that preserves human values while acknowledging the reality of its commercialization.

Given this reality, we start by changing the discourse around AI to embrace its potential and risks instead of sensationalizing them. As we develop these systems, we must ensure their training captures and nurtures the best aspects of who we are. To do that, we must create a common framework for what these “best aspects” are while acknowledging that this will include a wide range of relative truths.

Exploratory research asks big questions. Are we taking the necessary steps to ensure AI is optimized to enhance humanity’s richness and complexity, or are we reducing ourselves to commodified data points that reflect only the most sensationalized aspects of our behavior? This question should be at the forefront if we want to avoid repeating past mistakes.

Digital’s Complex Legacy

From my early work in broadband development to today, I’ve watched well-intentioned, beneficial advancements also lead to long-term, harmful consequences. Much like current promises about AI, broadband was initially seen as a democratizing force—a tool to empower and connect. High-minded ideals and a utopian vision drove the people who built these services. I know because I was one of those people. I’ve struggled with both the pride of being part of an exciting time in our technological history and disillusionment with how some aspects have turned out. As commercial interests took over, particularly with the adoption of the ad-driven model, the Internet’s original promise was transformed.

Many of the unintended consequences we are dealing with now were foreseen, but it didn’t matter; the dot-com bubble burst created an existential crisis for the industry. It forced profit-or-perish decisions that shaped much of today’s Internet, turning hopeful optimism into commercial necessity. The same unchecked optimism, and just as much doomsday prediction, now surrounds AI. Might we use AI development as an opportunity to address the negative aspects of our past technological choices head-on?

Because of how broadband shifted to survive, generative AI has been handed a complex inheritance: profit-optimized content has become the foundation for its simulated understanding of human interaction.

Shallow Data In, Shallow Interactions Out

Too often, technology has forced us to adapt to its limitations rather than expand ours. One early attempt to address this was advocacy for natural language interfaces: aligning systems with how people organically communicate and making interactions easier, rather than forcing users to conform to rigid, efficiency-driven workflows. AI should be no different, especially given its potential to be deeply woven into our daily lives.

Today, most AI systems are built on data from platforms like Google, Facebook, and TikTok. These platforms prioritize engagement, rewarding the most attention-grabbing content, not necessarily the most meaningful. While vast and seemingly comprehensive, this data represents inaccurate and incomplete versions of ourselves. Consider for a moment those sources. Do they reflect our goals as a species?

Some AI proponents believe techniques like cross-domain learning and transfer learning—where AI systems are trained on data from multiple domains—can mitigate biases and data gaps. These techniques can improve AI systems’ ability to handle more complex tasks. However, these technology-focused solutions are band-aids unlikely to address the underlying design failure: the data itself.

One of my favorite tech dad jokes came from a colleague at a systems integrator: “How did God create the universe in 7 days? There was no legacy system.”

I’m passionate about technology, but we must be pragmatic in its implementation, and the devil is in the details. Amid the AI hype, it’s easy for tech leaders to forget what legacy systems have taught us: once systems are implemented, they tend to persist, flaws and all. Retooling from scratch is costly and time-consuming, meaning the foundations of today’s AI systems will continue to shape tomorrow’s world. Without a focused effort to integrate context-rich, qualitative information, we risk building AI systems that fail to enrich our lives, limit how we engage with the world, and codify narrow representations of ourselves well into our children’s futures.

We’ve Already Adapted to Technology’s Distorted Lens

Seemingly insignificant behaviors can reveal profound insights that broad datasets miss. Hesitations, pauses, or other external behavioral changes could indicate a point in a workflow that inherently requires a higher cognitive load, or a genuine barrier to completing a task. Big data, which typically captures only outcomes, fails to detect this. Even more concerning, users often adapt to poorly designed system experiences and believe their struggles stem from their own inadequacies rather than from the technology itself.

We’ve watched technology move beyond being a tool we adapt to in small ways into a force that shapes our culture in significant ways we don’t control, eroding our collective feeling of agency. Narratives pushed by the algorithms behind these platforms don’t just diminish our sense of self-worth and of reality; they also encourage consumerism, impulsivity, and emotional reactivity. As AI systems inherit data from these platforms, they reinforce these interactions, codifying hyper-consumerism, performative relationships, and attention manipulation.

Our lived experiences—unique, messy, and deeply meaningful—are being flattened, standardized, and spit back at us in increasingly distorted forms. Not only are we facing a loss of biodiversity but also of human diversity, and it’s unfolding on our watch.

Correcting Course

At the heart of AI’s current development lies a mismatch between its commercial incentives and society’s broader needs. The race to deploy AI to boost stockholder confidence undermines its real potential. Yet, we can correct course, making conscious choices to align AI’s evolution with our better angels.

Pivoting to a Long-Term Investment Model

We should adopt a long-term investment model that encourages careful consideration before releasing AI technologies to the public, instead of treating the public as test subjects and training fodder. It’s tempting to think that if we need more data, we can simply allow users to provide it in real time. However, this data is a feedback loop of simplified tasks and interactions, and I question the ethics of this approach.

Companies can still find a win-win. Jeff Bezos’s well-known approach to Amazon is a powerful example of this kind of long-term investment thinking. From the outset, Bezos emphasized long-term infrastructure over immediate profits, understanding that the company could grow exponentially by building the necessary foundation.

Ethical Walled Gardens

Another crucial component of redirecting AI’s future is the concept of ethical walled gardens. “Walled gardens” often mean monopolistic ecosystems controlled by tech giants like Google or Facebook, but an ethical walled garden can serve a different purpose: a secure, gated space where clear moral principles and safeguards govern AI development and deployment.

Privacy-focused laws, such as the European Union’s General Data Protection Regulation (GDPR), help protect user privacy and create secure spaces for data collection. However, they will not inherently address the quality or completeness of AI training data. While these frameworks safeguard data from exploitation, they must be paired with strategic efforts to improve the collected data.

The Value of Qualitative Data

Addressing AI’s inherited blind spots to build better systems means investing heavily in purpose-generated qualitative data such as ethnographic studies, contextual observations, and curated sociological findings. Valuable insights can come from methods that capture real-life, context-rich interactions beyond the direct influence of consumerism. Armed with this goal-focused content, AI can begin to internalize the diverse and frequently counterintuitive ways humans engage with the world, providing essential balance.

To ensure AI systems are trained on meaningful data, we should continue to optimize them to incorporate and use unstructured data more effectively. This will require AI models that can contextualize human behavior. Some research suggests that combining big data with smaller, qualitative datasets—often called data triangulation—can help AI systems better reflect the richness of our experiences.
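To make that triangulation concrete, here is a minimal Python sketch; the corpus names, sizes, and the 25% batch share are illustrative assumptions rather than recommendations. The idea is simply that a small, purpose-built qualitative dataset can be deliberately over-sampled when assembling training batches, so its influence far exceeds its raw volume:

    import random

    # Hypothetical corpora: a large scraped behavioral dataset and a small,
    # purpose-built qualitative dataset (e.g., coded ethnographic excerpts).
    web_scale_data = [f"engagement_sample_{i}" for i in range(100_000)]
    qualitative_data = [f"ethnographic_sample_{i}" for i in range(500)]

    def sample_training_batch(batch_size=32, qualitative_share=0.25):
        # Deliberately over-weight the qualitative corpus: it is 0.5% of the
        # raw data but a fixed 25% of every training batch.
        n_qual = int(batch_size * qualitative_share)
        batch = random.sample(qualitative_data, n_qual)
        batch += random.sample(web_scale_data, batch_size - n_qual)
        random.shuffle(batch)
        return batch

    print(sample_training_batch()[:4])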

Generative Qualitative Research for AI

Qualitative data is used today to augment and refine AI models; this is not new. However, this unstructured data (an estimated 80% of all content available) isn’t purpose-built for the role. As discussed, it is subject to the same biases and blind spots as other data. Qualitative data requires significant human effort to clean, code, and curate. A partial solution currently being explored uses a hybrid model, where researchers use AI tools to help codify the data. The importance of human oversight cannot be overstated in this effort.
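As one illustration of that hybrid model, the sketch below uses hypothetical data, and suggest_code stands in for a real AI labeling model: the tool proposes a code for each interview excerpt, and low-confidence suggestions are escalated to a human researcher before anything enters the curated dataset:

    # `suggest_code` stands in for a real AI classifier; the labels,
    # confidence values, and review threshold are illustrative only.
    def suggest_code(excerpt):
        if "stuck" in excerpt or "confus" in excerpt:
            return ("friction", 0.62)
        return ("neutral", 0.95)

    def code_excerpts(excerpts, review_threshold=0.8):
        curated = []
        for excerpt in excerpts:
            label, confidence = suggest_code(excerpt)
            if confidence < review_threshold:
                # Low-confidence suggestions go to a human reviewer, who can
                # accept the AI's label (press Enter) or type an override.
                override = input(f"Review '{excerpt}' (suggested: {label}): ")
                label = override or label
            curated.append((excerpt, label))
        return curated

    print(code_excerpts(["I got stuck on the second step", "Checkout worked fine"]))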

A complementary, amplifying approach to mining our trove of unstructured data is to conduct targeted generative studies whose findings can be weighted more heavily to help AI models decipher the raw data they consume. Rigorous methodologies and research objectives based on collectively defined goals can ensure that the specific concepts and dynamics the AI model is trained on are conveyed in a way that requires less human oversight in the long term.

A framework for generative research for AI model training can and should be defined. Methods used in traditional software systems today can provide insight. Foundational principles for current user research are one-on-one interviews (vs. focus groups) and a heavy focus on observational and contextual methods. When properly structured, implemented, and analyzed by people with expertise in human-centered design, this type of research can be highly directional and capture a wide range of targeted, nuanced insights. The results of these studies, codified with the help of AI and weighted by humans, could provide robust training benchmarks.

As we shape the future of AI, let’s move beyond technological optimism or doomism, roll up our sleeves, and work to address our industry’s complex legacy. The time for developing mutually defined goals for human-centered design in AI is now. We have a choice: allow this technology to evolve only to reflect what’s commercially viable, or intervene to ensure AI represents us authentically. Answers lie in investing in richer data, focusing on human experience, and refusing to let AI reinforce the flaws that have plagued our digital spaces for decades.

Would you like to learn more about human-centered design and AI? Check out the following materials:

About the Author
Dorothy is a digital strategist and researcher who works with companies to blend human-centered research with emerging technologies to navigate complex challenges.

Tips for Establishing User Research in an Organization

Q. How was God able to create the world in seven days?
A. No legacy system.

Unlike “God” in this corny systems integrator joke, our realities constantly require us to deal with legacy: systems, processes, and attitudes. This article discusses some critical success factors for getting credible, valuable research data. It also provides some ideas on project management and obtaining stakeholder buy-in. Many of these tips come directly from real project experiences, so examples are provided where applicable.

Be Prepared for Setbacks

A good friend and mentor of mine once told me—after a particularly disappointing professional setback—that it’s quite possible to do everything the “right” way and for things still not to work out. What he was saying, in short, is that not everything is in our control. This understanding liberated me to start looking at all my efforts as single attempts in an iterative process. As a result, when starting something new, I tend to try out a range of things to gauge a baseline for what works, what doesn’t, and what might have future potential. Anyone who has adopted this approach knows that as time passes, a degree of mastery is achieved and your failure percentage decreases. This doesn’t occur because you are better at “going by the book”; it happens because you get better at identifying and accounting for things outside your control.

That said, whether you are trying to introduce a UXD practice into your organization or are just looking to implement some user research, the most realistic advice I can offer is the Japanese adage “nanakorobi yaoki”: fall down seven times, get up eight.

Account for Organizational Constraints

You don’t need a cannon to shoot down a canary. Be realistic; consider the appropriateness of your research methods in the context of your organization’s stage and maturity. It will not matter how well-designed or “best practice” your research is if the results cannot be adequately utilized. Knowing where you are in the lifecycle of a company, product, and brand will help set expectations about results. In this way, a product’s user experience will always be a balance between the needs of the user and the capacity of an organization to meet those needs. If you are developing in a small startup with limited resources, your initial research plans may be highly tactical and validation-focused. You’ll probably want to include plans that leverage family and friends for testing and rely heavily on existing third-party or purchased research. By contrast, a larger organization with a mature product will need to incorporate more strategic, primary research and will have use for more sophisticated methods of presenting and communicating research to a wide range of stakeholders.

Foster a Participatory Culture

Want buy-in? Never “silo” your user research.

Sometimes, particularly in large corporations, there can be a tendency for different departments to silo or isolate their knowledge. This can be for competitive “Fiefdom Syndrome”[1] reasons or, more often than not, simply a lack of process for effectively distributing information. Whatever the challenges, there are some very practical, self-serving reasons to actively communicate your UXD processes and research. First, because anyone in your organization who contributes to the software’s design is likely to have an impact on the user experience, it’s your job to ensure that those people “see what you see” and are empowered to use the data you find. Second, UXD is an art, and like anything else with a degree of subjectivity, you’ll need credibility and support if you want your insights and interpretations accepted. Lastly, UXD is a complex process with many components; you will get more done faster if you encourage active company-wide participation.

Tips on fostering participation:

  • Identify parts of your UX research that could be performed by other functional areas, e.g., surveys run by Customer Care, usability guidelines compiled by Quality Assurance, additional focus group questions from Marketing.
  • Offer other areas substantive input into user testing, surveys, and other research, e.g., add graphic design mockups into a wireframe testing cycle and test them with users separately from the wireframes.
  • Discuss process-integration ideas with the engineering, quality assurance, product, editorial, marketing, and other functions. Make sure everyone understands what the touch-points are.
  • In addition to informing your own design efforts, present user research as a service to the broader organization; schedule time for readouts, publish your findings, and invite people to observe testing sessions.

Understand the Goals of Your Research

Are you looking to explore user behaviors and investigate future-thinking concepts? Or, are you trying to limit exploration and validate a specific set of functionality? There is a time and place for both approaches, but before you set out on any research effort it is important that you determine the overarching goal of your research. There are some distinct differences in how you implement what I’ll call discovery research vs. validation research—each of which will produce different results.

  • Discovery Research, which can be compared to theoretical research, focuses on the exploration of ideas and the investigation of users’ preferences and reactions to various concepts. Discovery research is helpful for new products, innovations, and some troubleshooting efforts. This type of research can complement market research, but unlike focus groups, UXD discovery research explores things such as unique interaction models or user behaviors when interacting with functionality specific to search or social media.
  • Validation Research, which can be compared to empirical research, focuses more on gauging users’ acceptance of a product already developed, or of a high- or low-fidelity prototype intended to guide development. While a necessary aspect of the UXD process, validation research tends to be more task-based and less likely than discovery research to call attention to false assumptions or superseding flaws in a system’s design. An example of a false assumption that might not be revealed in validation research is the belief that an enhanced search tool is necessary. The tool itself may have tested very well, but the task-specific research method failed to reveal that the predominant user behavior is to access your site’s content through a Google search. Therefore, you might have been better off enhancing your SEO before investing in a more advanced search.

Crafting Your Research Strategy

Just as in any project effort, it is vital to first define and document your goals, objectives, and approach. Not only does this process help you make key decisions about how you want to move forward, it will serve as your guidepost throughout the project, helping you communicate activities to others. After the research is conducted, it lends credibility to your results by explaining your approach. A well-crafted research strategy provides an appropriate breadth and depth for a more complete understanding of what you observe. Consider small, incremental research cycles using various tactics. An iterative, multi-faceted methodology allows for more cost-efficient project lifecycles. It also mitigates risk, since you only invest in what works.


Figure 2: An example of a “Discovery” research strategy developed for a media company. The strategic plan consisted of audits, iterative prototypes, user testing, and various events.

Consider a Research Calendar

A research calendar can help you manage communication as well as adapt to internal and external changes. It can help track research progress over time, foster collaboration, reduce redundancy, and integrate both team and cross-departmental efforts. A good research calendar should be published, maintained, and utilized by multiple departments. It should include recurring intervals for items such as competitive reviews and audits, calibrated to the needs of your product. Your calendar doesn’t have to be fancy or complicated; you can use an existing intranet, a company-wide Outlook calendar, or even a public event manager such as a Google or Yahoo! calendar. Regardless of the tool, your research calendar can help prevent people from thinking of user research as a one-off effort. User research should be considered a living, evolving, integral part of your development process, and a maintained research calendar with a dedicated owner appropriately conveys this. If your company is small or you are just getting started with user research, consider collaborating with other departments to add focus groups, UAT events, and QA testing to the calendar as well. Not only will this foster better communication, it may also result in the cross-pollination of ideas.

Be Willing to Scrap Your “Best” Ideas

It’s easy to become enamored with an idea or concept, something that interests us or “feels” right, despite the fact that the research might be pointing in a different direction. Sometimes this comes from a genuine intuitive belief in the idea; other times it’s simply the result of having invested so much time and/or money into a concept that you’re dealing with a type of loss aversion bias[2]. Even after years of doing this type of research, I have to admit this is still a tough one for me, requiring vigilance. I have seen colleagues whom I otherwise hold in high regard hold on to ideas regardless of how clear it is that it’s time to move on. This tendency, to find what we want to find and to structure research to confirm our assumptions, is always a possibility in user research. And while it can be mitigated by process and methodology, it still takes a degree of discipline to step back, play devil’s advocate with your best, most fascinating ideas, and look at them in the harsh light of the data being presented to you. Ask yourself: am I observing or advocating? Am I looking only at data that supports my assumptions, casting a blind eye to anything that contradicts them? It can be a painful process, but if the idea can hold up to objective scrutiny, you might actually be on to something good.

Questions About This Topic?

I’m happy to answer more in-depth questions about this topic or provide further insight into how this approach might work for your company. Post a comment or email me at dorothy [at] danforthmedia [dot] com.

About Danforth Media
Danforth is a design strategy firm offering software product planning, user research, and user-centered design (UCD). We provide credible insights and creative solutions that allow our clients to deliver successful, customer-focused products. Danforth specializes in leveraging user experience design (UXD), design strategy, and design research methodologies to optimize complex multi-platform products for the people who use them.

We transform research into smart, enjoyable, and enduring design.
www.danforthmedia.com

 


[1] Herbold, Robert. The Fiefdom Syndrome: The Turf Battles That Undermine Careers and Companies – And How to Overcome Them. Garden City, NY: Doubleday Business, 2004.
[2] “Loss Aversion Bias is the human tendency to prefer avoiding losses above acquiring gains. Loss aversion was first convincingly demonstrated by Amos Tversky and Daniel Kahneman.” http://www.12manage.com/description_loss_aversion_bias.html

Conducting a Solid UX Competitive Analysis

“Competition brings out the best in products and the worst in people.” – David Sarnoff

Most people are familiar with the concept of a competitive analysis; it’s a fairly standard business term for identifying and evaluating your competition in the marketplace. In the case of UXD, a competitive analysis is used to evaluate how a given product’s competition stacks up against usability standards and overall user experience. A comparative analysis is a term I’ve often used to describe the review of applications or websites that are not in direct competition with a product but may have similar processes or interface elements worth reviewing.

Often, when a competitive review is conducted, the applications or websites are reviewed against a set of fairly standard usability principles (or heuristics) such as layout consistency, grouping of common tasks, link affordance, etc. Sometimes, however, the criteria can be more broadly defined to include highlights of interesting interaction models, notable functionality, or other items that might be useful in the context of the product being designed or the goals of a specific release.

The Expert Review
Competitive reviews can be done in conjunction with an “expert” review, which is a usability evaluation of the existing product. If you are doing both a competitive and an expert review, it’s helpful to start with the competitive review and then conduct the expert review using the same criteria. Completing the competitive review first allows you to judge your own product relative to your competition.

Why Conduct a Competitive Analysis?

  • Understand how the major competition in your space is handling usability
  • Understand where your product stands relative to its competition
  • Generate ideas for solving various usability issues
  • Get an idea of what it might take to gain a competitive edge via usability/UX

When is a Competitive Analysis Most Useful?

  • If a thorough competitive review has never been conducted
  • When a new product or a major, game-changing rebuild is being considered
  • Annually or biannually, to keep an eye on trends in your industry and on the web (such as changes in how social networking sites are integrated)

Competitive analysis is best done during the early planning and requirements-gathering stages. It can be conducted independently of a specific project cycle, or, with more focused criteria, it can help define the goals of a specific release.

Limitations of a Competitive Analysis

  • A competitive analysis can help you understand what it will take to come up to par with your competitors; however, it cannot show you how to innovate and lead.
  • Insights can be limited by the knowledge level and/or evaluation abilities of the reviewer.
  • Reviews can be time-consuming to conduct and need to be redone on a regular basis.

How to Conduct a UXD Competitive Analysis

  1. Select your competition. On average, I recommend targeting no fewer than five but no more than ten of your top competitors. The longer the competitive list, the more difficult it will be to do a sufficiently thorough investigation. In addition, there is a point of diminishing returns beyond which little is happening in the space that hasn’t already been brought to light by a previous competitor.

     Consider this…

     When selecting a list of competitors, instead of just asking “who does what we do?”, think about users’ alternatives. Ask: who or what is most likely to keep users from using our software or visiting our website? While not normally thought of as “competition”, the alternatives to an operations support system could be an Excel spreadsheet or printed paper forms.

  2. Define the assessment criteria. It is important to define your criteria before you get started to be sure you are consistent in what you look for during your reviews. Take a moment to consider some of the issues and goals your organization is dealing with, along with any hypotheses you uncovered during your audit, and incorporate these into the criteria where possible. Criteria should be specific and assessable. Here is a short excerpt of the criteria used in an e-commerce site evaluation:
     i. Customer reviews and ratings?
     ii. Can you wish-list your items to refer back to later?
     iii. In-store pickup option?
     iv. Product availability indication?
     v. Can you compare other items against each other?
     vi. What general shopping features are available?
  3. Create a spreadsheet. Put your entire assessment criteria list in the first column and the names of your competition along the top header row. Be sure to leave a general comments line at the bottom of your criteria list; you will use this for notes during individual reviews. Some of the evaluation might be relative (e.g., rate the quality of imagery relative to other sites, 1–10), so it is particularly helpful to have one spreadsheet as you work through each of your reviews. (A minimal scripted version of this scaffold appears after this list.)
  4. Gather your materials. Collect the competitive site URLs, software, or devices that you will be reviewing so they are readily available. A browser with tabbed browsing works well for web reviews. For mobile applications, you can often download simulators that display the device software on your computer.
  5. Start reviewing. One at a time, go down the criteria list while looking through the application and enter your responses. It can be helpful to use a dual-monitor setup with the application in one view and the spreadsheet in the other. Take your time and try to be as observant as possible; you are looking for both good and bad points to report. As you review, write down notes on what you liked, what annoyed you, and any interesting widgets you see. Take screen captures of interesting or relevant screens as you do each review.
  6. Prepare the analysis. Create an outline of the review document, including a summary area and a section for each individual review. Paste the assessment results and your notes from the spreadsheet into the document and use them as a starting point for writing the report. You may need to grab additional screen captures of specific things that will appear in your evaluation.
  7. Summarize your insights. Now that the reviews are done, look back at what data pops out as most relevant. Some of the criteria results can be translated into summary charts and graphs to better illustrate the information.
  8. Schedule a read-out. Take time to present your findings; set up a read-out for your colleagues. You may want to create a high-level PowerPoint presentation of some of the more interesting points from your review. After conducting the read-out, publish your documentation and let people know where you’ve placed the information.
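If you want to scaffold the spreadsheet from step 3 programmatically, here is a minimal Python sketch; the criteria and competitor names are placeholders to substitute with your own. It writes the criteria down the first column and your competitors across the header row as a CSV file that any spreadsheet tool can open:

    import csv

    # Placeholder criteria and competitor names; substitute your own lists.
    criteria = [
        "Customer reviews and ratings?",
        "Wish list to refer back to later?",
        "In-store pickup option?",
        "Product availability indication?",
        "General comments",  # notes row at the bottom, per step 3
    ]
    competitors = ["Company A", "Company B", "Company C", "Company D", "Company E"]

    with open("competitive_review.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Criteria"] + competitors)  # competitors across the header row
        for criterion in criteria:
            writer.writerow([criterion] + [""] * len(competitors))  # blank cells to fill in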

Competitive Assessment Rubric

If you don’t have time for a full written competitive analysis, you can evaluate your competition with an assessment rubric. Because it results in a clear ranking, the rubric is a good “at a glance” way of communicating a system’s relative strengths and weaknesses to clients. Some things to note about this evaluation method:

  • It is not a scientific analysis; it’s a shortcut for communicating your assessment of the systems reviewed. Like judging a talent competition, subjective ratings (e.g., “eight out of ten”) are inherently imprecise. However, if you use consistent, pre-defined criteria, you should end up with a realistic representation of your comparative rankings.
  • When creating the assessment criteria it is important to select attributes that are roughly equivalent in value. In the example below, “Template Layouts” and “Browsing & Navigation” were equally important to the overall effectiveness of the sites reviewed.

The following rubric was created to evaluate mobile phone activation sites, but this approach can be adapted to create ranking metrics for any application.

1. First, create the criteria and rating system by which you will evaluate each system.

Rating scale: 1 – Poor, 2 – Average, 3 – Excellent

Marketing & Commerce Integration
  1 – Poor: The option to “Buy Now” is not offered in the marketing pages, or users are sent to a third-party site.
  2 – Average: The option to “Buy Now” is available from the marketing pages, but there are some usability issues with layout and transition.
  3 – Excellent: The marketing and commerce sites are well integrated and provide users with an almost seamless transition.

Template Layouts (Commerce)
  1 – Poor: The basic layout is not consistent from page to page, and/or the activity areas within the layout are not clearly grouped by type of user task.
  2 – Average: The layout is mostly consistent from page to page, and major activity areas are grouped by task type. Some areas with information-heavy content or more complex user tasks deviate from the established layout paradigms.
  3 – Excellent: The site shows a high level of continuity both in page-to-page transitions and in task-type groupings. Information-heavy content and complex user tasks are well thought out and intuitive relative to the site’s established layout paradigms.

Browsing & Navigation
  1 – Poor: The site lacks a cohesive information architecture. Information is not in a clear top-down hierarchy. There are numerous “orphan” or pop-up pages that do not fit within the site structure. Similar content is duplicated in multiple areas or is presented in multiple navigational contexts.
  2 – Average: The site has a structured information architecture. Secondary and tertiary navigation items are related to parent elements, but there may be multiple menus unrelated to the broader structure. There may be orphan pages of detail or less relevant information.
  3 – Excellent: The information architecture is highly cohesive. Information is structured with a clear understanding of user goals. Everything has a logical place within the architecture; secondary menus are incorporated into the site structure or clearly transitioned.

Terminology & Labeling (Commerce)
  1 – Poor: Terminology and labeling are inconsistent, confusing, or inaccurate. Different terms are used to represent the same concept. Some terms may not adhere to a common understanding of their meanings.
  2 – Average: Terminology and labeling are consistent but could be more intuitive. Some unnecessary industry-specific terminology or uncommon terms are used.
  3 – Excellent: Terminology and naming are both intuitive and consistent. Only necessary industry-specific terminology is used, in context, with help references.

2. Next, evaluate the competition along with your own system, scoring the results.

  • Marketing & Commerce Integration: how well the site handles the user’s transition from browsing product information to making an online purchase.
  • Template Layouts: how clear and consistent the overall layout is and how well the layout elements translate to different areas of the site.
  • Browsing & Navigation: relative clarity and consistency of the information architecture and overall ease of browsing.
  • Terminology & Labeling: relative clarity and consistency of the language use and element naming.

 


System        Marketing & Commerce Integration   Template Layouts   Browsing & Navigation   Terminology & Labeling   Total
Our System    2 (Average)                        1 (Poor)           2 (Average)             3 (Excellent)             8
Company A     3                                  3                  2                       3                        11
Company B     2                                  2                  2                       2                         8
Company C     1                                  3                  1                       2                         7
Company D     2                                  3                  2                       3                        10
Company E     3
Company F
Company G
Company H

(Company E was scored on the first criterion only; Companies F, G, and H were not scored in this example.)

 

3. Once your table is complete, sort on the totals to see your rough rankings.
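To illustrate the totaling and sorting in steps 2 and 3, here is a short Python sketch using the example scores from the table above (systems without complete scores are omitted):

    # Example scores keyed by system; the four ratings follow the rubric's
    # criteria order (Integration, Layouts, Navigation, Terminology).
    scores = {
        "Our System": [2, 1, 2, 3],
        "Company A": [3, 3, 2, 3],
        "Company B": [2, 2, 2, 2],
        "Company C": [1, 3, 1, 2],
        "Company D": [2, 3, 2, 3],
    }

    # Total each row, then sort descending to get the rough rankings.
    rankings = sorted(scores.items(), key=lambda item: sum(item[1]), reverse=True)
    for name, ratings in rankings:
        print(f"{name}: {sum(ratings)}")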

User Experience Design Myths

This article attempts to dispel some common myths you may have heard about adopting user research or a user-centered design approach.

We Don’t Need UXD Because…

  • Our users are early adopters, or are our employees…
  • We are just trying to create a technical proof of concept…
  • We work iteratively in an Agile development environment…

Whatever the reason, consider this: there is no demographic that likes a poorly designed product. UXD foundations are arguably the most direct and efficient means to a well-designed product, and they can be adapted to any type of application or product lifecycle stage. So while your product may not need a complex UXD process or a specialized UXD team, the basics of user-centric design are always appropriate and will consistently result in a better product when appropriately applied.

UXD is Too Expensive

UXD methods can easily be modified to fit your budget. Each research method in this guide can be adapted to be implemented inexpensively and with meager resources. In fact, an appropriately defined research plan should reduce costs by getting you much closer to what your end users want, with far fewer problems post-release. Maintenance costs related to unmet or unforeseen user needs can be as high as 80% of overall development lifecycle costs (Pressman, 1992). There is a good reason why UXD is a growing field within software design: it shows a strong ROI.[1]

UXD Slows Down the Development Cycle

UXD can reduce and simplify the product development process. A common misconception about user-centric design is that it adds to and slows down the development cycle. And yes, UXD can be incorporated in such a way that it needlessly adds time to the process. However, it does not have to. In fact, as early as 1991, a study found that usability engineering reduced product development cycles by 33 to 50% (Bosert, 1991). A well-integrated, process-appropriate UXD effort will not only produce a more successful product but will also reduce development time and costs.

UXD Is Mostly Useful for Consumer Products

Some form of UXD is useful for any system a user interacts with. Can users easily find what they need? Are common tasks simplified rather than an unnecessary drudgery? Are the labels clear, and do they use commonly accepted terms? All of these UX-related questions are as relevant to a consumer-focused e-commerce site as they are to a billing operations system. UX methods can be adapted and applied to information and marketing interfaces as well as transactional applications. The main difference is the goal: successful UXD on a consumer product usually drives more sales, while success in an operations system usually takes the form of higher adoption rates and increased productivity.

UXD Only Affects the Presentation Layer

UXD is more than skin deep. Don’t get me wrong: with a background in the arts, I see the value in making things look good, and most people respond to a pleasing visual design. But thinking of UXD as a presentation-layer process will substantively limit its ability to improve your product. A good example of how UXD affects functionality is faceted categorization for parametric searches[1]. User research can help you determine not only how these fields should be laid out, but how to categorize the data, what to name categories, and what kinds of search groupings users want.

User Research Will Help Us Define the Right User Experience

Well, you can try. But it is important to understand that there is no single “correct” user experience for a product. The process of interpreting the results of user research and deciding how the resulting insights should be translated into final designs is probably best considered an art informed by science. It’s quite possible (and common) to have two totally different yet viable directions that both address the same user goals and requirements. (In these cases, additional criteria can be used to determine which direction to take, i.e., budget, time, brand alignment, other features, etc.) Despite its inherent lack of absolutes, user research can give you the best educated guess possible regarding your users’ behavior, addressing upwards of 80% of issues before taking a product to market.

User Research is Market Research

User research is not market research. While there is a strong brand/marketing component to user experience design, the research methods are distinct, with different methodologies, considerations, and results. Unlike market research, UX research is less concerned with what features are available or what the marketing messages are than with how successful a specific design is in articulating its features and how usable and accessible the product is for the end user.

Market research is business-centric; it uses the analysis of data in an attempt to move people to action. User research is (you guessed it) user-centric; its goal is to analyze user behaviors and preferences in order to design for them better. Often, these two are means to the same end. Sometimes, however, there is a conflict. An e-commerce website could have a promotional pop-up screen that most users find annoying, despite the fact that it generates a good deal of revenue. The UXD practitioner should be free to advocate for the users’ goals, and the marketer for the business’s goals. Any resulting compromises should be considered in the broader context of the company’s brand.

User Research Is Only Useful During Requirements Gathering

User research is very useful during requirements gathering, but there are real benefits to incorporating it throughout the software development lifecycle. Validation is a big part of user research: what sounded like a good idea and tested well in the design phase might not work as well in its final implementation. A number of inevitable changes and revisions occur during development, so it’s important to retest and validate your release after all the pieces of the puzzle have been put together. This can be achieved with participant-based user testing or by providing structured feedback mechanisms in a beta or limited pilot program.

Here are some sample User Research methods for each phase of the software development lifecycle:

  • High-Level Requirements
    o Ethnographic Studies
    o Concept Prototypes & Testing
    o Surveys with a Broad Focus
    o Competitive Reviews
  • Detailed Requirements
    o Validation Tests for Screens and Workflows
    o Tactical Surveys
    o Graphic Design Reviews
  • Development & QA Testing
    o Validate Changes and Workarounds
  • Release/Deployment
    o Feedback Forms
    o User-centric Beta or Pilot Testing
  • Maintenance
    o A/B Split Testing