Exploring Generative AI Leads Me to One Conclusion: Human-Centered Design Matters More Than Ever

AI discussions span everything from utopian promises to technical deep dives. As someone whose career has centered on observing how humans interact with technology, I’ve witnessed how our systems shape—and are shaped by—our lives. Recently, as a strategist and researcher, I’ve been more deeply exploring generative AI’s potential.

One realization stands out: We urgently need human-centered design (HCD) and established user-centered methods to steer AI’s broader societal impact. By adapting UX principles and embedding them into the foundations of these models, we can avoid further entrenching existing divides, biases, and superficial interactions. This article isn’t a technical exploration of generative AI; it’s an open invitation to rethink how we might design it with greater intention.

It’s tempting to believe that AI could be humanity’s next great leap forward—a technology that will profoundly and ethically enhance our lives. In a perfect world, AI would evolve through global cooperation, with governments and organizations collaborating to set agreed-upon boundaries in our long-term interdependence with this technology.

Let’s face it, that’s not going to happen.

AI is already deeply enmeshed in profit-driven models, and the commercial forces pushing AI’s rapid development are unlikely to yield a blue-sky vision of AI as a mission-driven, human-first global endeavor. We live in a world where technology will be monetized as quickly as possible, even when it’s still half-baked and its long-term consequences have yet to be considered.

So, what do we do? Our challenge is to guide generative AI’s development in a way that acknowledges its commercialization without abandoning human values.

Given this reality, we start by changing the discourse around AI to embrace its potential and risks instead of sensationalizing them. As we develop these systems, we must ensure their training captures and nurtures the best aspects of who we are. To do that, we must create a common framework for what these “best aspects” are while acknowledging that this will include a wide range of relative truths.

Exploratory research asks big questions. Are we taking the necessary steps to ensure AI is optimized to enhance humanity’s richness and complexity, or are we reducing ourselves to commodified data points that reflect only the most sensationalized aspects of our behavior? This question should be at the forefront if we are to avoid repeating past mistakes.

Digital’s Complex Legacy

From my early work in broadband development to today, I’ve watched well-intended, beneficial advancements also lead to long-term, harmful consequences. Much like current promises about AI, broadband was initially seen as a democratizing force: a tool to empower and connect. High-minded ideals and a utopian vision drove the people who built these services. I know because I was one of those people. I’ve struggled to reconcile the pride of being part of an exciting time in our technological history with disillusionment over how some aspects have turned out. As commercial interests took over, particularly with the adoption of the ad-driven model, the Internet’s original promise was transformed.

Many of the unintended consequences we are dealing with now were foreseen, but it didn’t matter; the bursting of the dot-com bubble created an existential crisis for the industry. It forced profit-or-perish decisions that shaped much of today’s Internet, turning hopeful optimism into commercial necessity. The same unchecked optimism, along with an equal measure of doomsday prediction, now surrounds AI. Might we use AI development as an opportunity to address the negative aspects of our past technological choices head-on?

Because of how broadband shifted to survive, generative AI has been handed a complex inheritance: profit-optimized content has become the foundation of its simulated understanding of human interaction.

Shallow Data In, Shallow Interactions Out

Too often, technology has forced us to adapt to its limitations rather than expand ours. One early attempt to address this was advocacy for natural language interfaces: aligning systems with how people organically communicate, making interactions easier rather than forcing users to conform to rigid, efficiency-driven workflows. AI should be no different, especially given its potential to be deeply woven into our daily lives.

Today, most AI systems are built on data from platforms like Google, Facebook, and TikTok. These platforms prioritize engagement, rewarding the most attention-grabbing content, not necessarily the most meaningful. While vast and seemingly comprehensive, this data represents inaccurate and incomplete versions of ourselves. Consider for a moment those sources. Do they reflect our goals as a species?

Some AI proponents believe techniques like cross-domain learning and transfer learning, where AI systems are trained on data from multiple domains, can mitigate biases and data gaps. These techniques can improve AI systems’ ability to handle more complex tasks. However, these technology-focused solutions are band-aids unlikely to address the underlying design failure: the data itself.
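For readers unfamiliar with the mechanics, here is a minimal sketch of the transfer-learning pattern, assuming a PyTorch/torchvision setup; the backbone, class count, and training details are illustrative choices, not details from any particular system. Note what it does and doesn’t do: it reuses knowledge learned in one domain for another, but it cannot correct biases already baked into the pretraining data, which is exactly the design failure in question.

```python
# A minimal transfer-learning sketch in PyTorch: reuse a backbone
# pretrained on one domain (ImageNet) and fine-tune a small head on
# another. All specifics here are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on a large, general-purpose image corpus.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights so fine-tuning can't overwrite them.
for param in backbone.parameters():
    param.requires_grad = False

# Swap in a new classification head for the target domain (10 classes,
# chosen arbitrarily for this example).
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the new head's parameters are updated during training.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on examples from the target domain."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```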

One of my favorite tech dad jokes came from a colleague at a systems integrator: “How did God create the universe in 7 days? There was no legacy system.”

I’m passionate about technology, but we must be pragmatic in its implementation, and the devil is in the details. Amid the hype surrounding AI, it’s easy for tech leaders to forget what legacy systems have taught us: once systems are implemented, they tend to persist, flaws and all. Retooling from scratch is costly and time-consuming, meaning the foundations of today’s AI systems will continue to shape tomorrow’s world. Without a focused effort to integrate context-rich, qualitative information, we risk building AI systems that fail to enrich our lives, that limit how we engage with the world, and that codify narrow representations of ourselves well into our children’s futures.

We’ve Already Adapted to Technology’s Distorted Lens

Seemingly insignificant behaviors can reveal profound insights that broad datasets miss. Hesitations, pauses, or other observable behavioral changes could indicate a point in a workflow that inherently requires a higher cognitive load, or a genuine barrier to completing a task. Big data, which typically captures only outcomes, fails to detect this. There is an even more concerning fact: users often adapt to poorly designed system experiences and believe their struggles stem from their own inadequacies rather than from the technology itself.

We’ve watched technology move beyond being a tool we adapt to in small ways into a force that shapes our culture in significant ways we don’t control, eroding our collective sense of agency. Narratives pushed by the algorithms behind these platforms don’t just diminish our sense of self-worth and our grip on reality; they also encourage consumerism, impulsivity, and emotional reactivity. As AI systems inherit data from these platforms, they reinforce these interactions, codifying hyper-consumerism, performative relationships, and attention manipulation.

Our lived experiences—unique, messy, and deeply meaningful—are being flattened, standardized, and spit back at us in increasingly distorted forms. Not only are we facing a loss of biodiversity but also of human diversity, and it’s unfolding on our watch.

Correcting Course

At the heart of AI’s current development lies a mismatch between its commercial incentives and society’s broader needs. The race to deploy AI to boost stockholder confidence undermines its real potential. Yet, we can correct course, making conscious choices to align AI’s evolution with our better angels.

Pivoting to a Long-Term Investment Model

We should adopt a long-term investment model that encourages careful consideration before releasing AI technologies to the public, instead of using the public as test subjects and training fodder. It’s tempting to think that if we need more data, we can simply let users provide it in real time. However, this data is a feedback loop of simplified tasks and interactions, and I question the ethics of this approach.

Companies can still find a win-win. Jeff Bezos’s well-known approach to Amazon is a powerful example of this kind of long-term investment thinking. From the outset, Bezos emphasized long-term infrastructure over immediate profits, understanding that the company could grow exponentially by building the necessary foundation.

Ethical Walled Gardens

Another crucial component of redirecting AI’s future is the concept of ethical walled gardens. “Walled gardens” often mean monopolistic ecosystems controlled by tech giants like Google or Facebook, but an ethical walled garden can serve a different purpose: a secure, gated space where clear moral principles and safeguards govern AI development and deployment.

Privacy-focused laws, such as the European Union’s General Data Protection Regulation (GDPR), help protect user privacy and create secure spaces for data collection. However, they will not inherently address the quality or completeness of AI training data. While these frameworks safeguard data from exploitation, they must be paired with strategic efforts to improve the collected data.

The Value of Qualitative Data

Addressing AI’s inherited blind spots to build better systems means investing heavily in purpose-generated qualitative data such as ethnographic studies, contextual observations, and curated sociological findings. Valuable insights can come from methods that capture real-life, context-rich interactions beyond the direct influence of consumerism. Armed with this goal-focused content, AI can begin to internalize the diverse and frequently counterintuitive ways humans engage with the world, providing essential balance.

To ensure AI systems are trained on meaningful data, we should continue to optimize them to incorporate and use unstructured data more effectively. This will require AI models that can contextualize human behavior. Some research suggests that combining big data with smaller, qualitative datasets—often called data triangulation—can help AI systems better reflect the richness of our experiences.
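As a concrete illustration of that triangulation idea, here is a sketch of one simple way to combine the two sources during training: oversampling a small, researcher-curated dataset so a much larger behavioral one doesn’t drown it out. This assumes a PyTorch pipeline, and the dataset sizes and the 10x weighting are illustrative choices, not recommendations.

```python
import torch
from torch.utils.data import (ConcatDataset, DataLoader, TensorDataset,
                              WeightedRandomSampler)

# Stand-ins for the two sources: a large behavioral dataset and a small,
# researcher-curated qualitative one (sizes and features are illustrative).
big_behavioral = TensorDataset(
    torch.randn(100_000, 16), torch.zeros(100_000, dtype=torch.long))
small_qualitative = TensorDataset(
    torch.randn(2_000, 16), torch.ones(2_000, dtype=torch.long))

combined = ConcatDataset([big_behavioral, small_qualitative])

# Give each curated example 10x the sampling weight of a behavioral one
# so the smaller source isn't drowned out. The 10x is a deliberate,
# human-set choice, not a derived constant.
weights = torch.cat([
    torch.full((len(big_behavioral),), 1.0),
    torch.full((len(small_qualitative),), 10.0),
])
sampler = WeightedRandomSampler(weights, num_samples=len(combined))

loader = DataLoader(combined, batch_size=32, sampler=sampler)
```

The design choice worth noting is that the balance between the two sources is explicit and human-set, a deliberate decision rather than an artifact of raw dataset size.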

Generative Qualitative Research for AI

Qualitative data is used today to augment and refine AI models; this is not new. However, this unstructured data (an estimated 80% of all content available) isn’t purpose-built for the role. As discussed, it is subject to the same biases and blind spots as other data. Qualitative data requires significant human effort to clean, code, and curate. A partial solution currently being explored uses a hybrid model, where researchers use AI tools to help codify the data. The importance of human oversight cannot be overstated in this effort.
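Here is a rough sketch of what such a hybrid loop might look like, with the model call (`suggest_codes`) as a hypothetical placeholder for whatever classifier or LLM a team actually uses; the confidence threshold is likewise an assumption that would be calibrated against human coders in practice.

```python
# A sketch of the hybrid coding loop: an AI model proposes qualitative
# codes for each transcript excerpt, and anything below a confidence
# threshold is routed to a human researcher for review.
from dataclasses import dataclass

@dataclass
class CodedExcerpt:
    text: str
    codes: list[str]
    confidence: float
    needs_human_review: bool

def suggest_codes(excerpt: str) -> tuple[list[str], float]:
    """Hypothetical model call: returns proposed codes and a confidence score."""
    # In practice this would call an LLM or classifier; hardcoded for the sketch.
    return (["workaround", "frustration"], 0.62)

REVIEW_THRESHOLD = 0.8  # illustrative; set from pilot agreement with human coders

def code_transcript(excerpts: list[str]) -> list[CodedExcerpt]:
    results = []
    for text in excerpts:
        codes, confidence = suggest_codes(text)
        results.append(CodedExcerpt(
            text=text,
            codes=codes,
            confidence=confidence,
            # Low-confidence items stay in a human reviewer's queue.
            needs_human_review=confidence < REVIEW_THRESHOLD,
        ))
    return results
```

Everything below the threshold stays in a human reviewer’s queue, which operationalizes the oversight described above rather than treating it as an afterthought.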

A complementary and amplifying approach to mining our trove of unstructured data is to conduct targeted generative studies whose findings can be weighted more heavily to help AI models decipher the raw data they consume. Rigorous methodologies and research objectives based on collectively defined goals can ensure that the specific concepts and dynamics the AI model is trained on are conveyed in a way that requires less human oversight over the long term.

A framework for generative research for AI model training can and should be defined. Methods used in traditional software systems today can provide insight. Foundational principles for current user research are one-on-one interviews (vs. focus groups) and a heavy focus on observational and contextual methods. When properly structured, implemented, and analyzed by people with expertise in human-centered design, this type of research can be highly directional and capture a wide range of targeted, nuanced insights. The results of these studies, codified with the help of AI and weighted by humans, could provide robust training benchmarks.
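To make the benchmark idea tangible, here is a toy sketch of human-weighted scoring: each insight from a generative study carries a researcher-assigned weight, and a model’s score is the weighted share of insights it handles correctly. The insight names, weights, and pass/fail flags are all invented for illustration.

```python
# A toy human-weighted benchmark: researchers assign each study insight
# a weight, and the model's score is the weighted fraction it passes.
insights = [
    {"id": "pause-before-submit", "weight": 3.0, "model_passes": True},
    {"id": "workaround-export",   "weight": 2.0, "model_passes": False},
    {"id": "jargon-confusion",    "weight": 1.0, "model_passes": True},
]

total = sum(i["weight"] for i in insights)
score = sum(i["weight"] for i in insights if i["model_passes"]) / total
print(f"Weighted benchmark score: {score:.2f}")  # 0.67
```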

As we shape the future of AI, let’s move beyond technological optimism or doomism, roll up our sleeves, and work to address our industry’s complex legacy. The time for developing mutually defined goals for human-centered design in AI is now. We have a choice: allow this technology to evolve only to reflect what’s commercially viable, or intervene to ensure AI represents us authentically. Answers lie in investing in richer data, focusing on human experience, and refusing to let AI reinforce the flaws that have plagued our digital spaces for decades.


About the Author
Dorothy is a digital strategist and researcher who works with companies to blend human-centered research with emerging technologies to navigate complex challenges.

Feeding Two Wolves: How Balancing Creativity and Focus Fuels Innovation and Mastery

Updated 9/29/25 for clarity and development of key concepts.

There’s an old parable about two wolves living inside us. One represents our darker nature, the other our better instincts. The lesson: the wolf you feed is the one that thrives. But when it comes to innovation and mastery, I’ve found the real challenge isn’t choosing which wolf to feed. It’s learning to feed both.

In my work, those two wolves represent divergent and convergent thinking. Divergent thinking explores possibilities without constraint. Convergent thinking brings discipline and execution. Most of us naturally favor one over the other. That imbalance shows up in how we work, how teams function, and ultimately, in what gets built.

The Atelier Method and Innovation

Several years after art school, I returned to study master draftsmanship through the traditional atelier model, specifically the Grand Central Atelier methods. This classical approach emphasizes precision, observation from life, and technical rigor. It might seem disconnected from digital strategy work, but the parallels are telling.

My painting American Music (inspired by the Violent Femmes song) began with pure exploration. I didn’t know what the piece would become. I started with the song’s energy and let the concept reveal itself through sketching and experimentation. That’s divergent thinking: generating possibilities, following intuition, staying open to what emerges.

But once the concept crystallized, everything shifted. The next months were convergent work: hours of precise execution refining tones, adjusting shadows, capturing subtle shifts in light. Every brushstroke required focus and technical control. The creative freedom that sparked the idea gave way to disciplined execution.

This same pattern plays out in every innovation project. Early stages require divergent thinking: brainstorming, prototyping, exploring what’s possible without immediately judging feasibility. Then comes the shift to user research, data analysis, strategic planning. The wild ideas get refined into something that can actually ship.

Why Balance Is Hard

Most people get stuck on one side. Some teams generate endless ideas but never execute. Others jump to execution before exploring alternatives. They optimize the first solution they find rather than the best one.

I work with teams to move fluidly between these modes. Early in a project, we deliberately create space for divergent thinking. We encourage ideas that might seem unfocused but reveal valuable directions. Then we shift intentionally to convergent work, refining those insights into actionable strategy.

When it works, the two modes compound. Disciplined execution often sparks new creative insights. Exploring edge cases during implementation reveals better approaches. The wolves feed each other.

The Research Limitation

This balance matters because of a fundamental constraint most companies miss: you cannot research your way to breakthrough innovation.

Research is invaluable for understanding user needs and validating market fit. But research is inherently backward-looking. It tells you what people know they want based on what they’ve experienced. Groundbreaking products emerge from creative leaps beyond what users can articulate, combined with disciplined execution that makes those visions real.

Apple didn’t research their way to the iPhone. They made a creative bet about how people would want to interact with technology, then executed with obsessive focus on details. The research came later, validating and refining the core vision.

This is why companies staffed with smart people doing rigorous research still produce incremental products. They’re feeding only the convergent wolf, the one that optimizes known problems with proven methods. The divergent wolf that generates genuinely new possibilities is starving.

Progress requires both: research and data to ground ideas in reality, plus creative vision to push beyond what’s currently known or proven.

Mastery Works the Same Way

This pattern extends beyond product development to any pursuit of mastery. In classical realism, you don’t simply copy what you see. You interpret, bringing both technical precision and creative understanding to the work.

The process moves between modes constantly. You start with creative freedom, exploring how to approach the subject. Then you focus intensely on technical execution: mixing exact colors, capturing precise values. But during that focused work, you often discover new possibilities. A technique you practiced for realism suddenly suggests an unexpected creative direction.

That’s where divergent and convergent thinking integrate naturally. Not as opposing forces, but as complementary capabilities that strengthen each other through practice.

Application

If you recognize yourself getting stuck in one mode:

Too much divergence (endless ideation, nothing ships):

  • Set concrete decision points: “We explore until X date, then commit”
  • Define what “good enough” looks like before you start
  • Build in forcing functions that require convergence

Too much convergence (optimizing the first solution):

  • Deliberately schedule divergent thinking time early in projects
  • Use creative constraints that force novel approaches
  • Reward exploration, not just execution

For teams:

  • Make the mode shifts explicit: “We’re in divergent mode until Friday”
  • Don’t judge divergent ideas by convergent criteria (and vice versa)
  • Staff projects with people who can operate in both modes, not just their preferred one

The goal isn’t perfect balance. It’s fluidity—knowing when to explore, when to focus, and how to let each mode inform the other.


Would you like to learn more about divergent and convergent thinking in technology innovation? Check out the following materials:

The Synergy of Diverge and Converge in Design Thinking (Voltage Control): How divergent and convergent thinking are essential in the design thinking process, offering practical tips on how to integrate these approaches into innovative projects.

Unleash the Unexpected for Radical Innovation (MIT Sloan Management Review): Explores how radical innovation often emerges from unexpected ideas, highlighting the importance of environments that encourage divergent thinking and creativity. (May require subscription)

The Design of Everyday Things (Don Norman): A classic UX book, essential for understanding design thinking, usability, and balancing creativity with practical application in product design.


About the Author
Dorothy is a digital strategist and researcher who helps companies turn big ideas into real-world innovations. Outside of work, she is applying this same balance of creativity and focus to her current pursuit of master draftsmanship.

Case Study: Streamlining Carton Tracking Workflows for Enhanced Efficiency

How do you transform a legacy logistics system into an efficient, user-centered platform without major infrastructure overhauls?

PCSTrac, a carton tracking solution for online retailers, had proven its value in core functionality but was hindered by a cumbersome user interface and workflow inefficiencies. Although the system was technically robust, users often relied on time-consuming workarounds that slowed down operations. My role was to collaborate with senior engineers to bring a user-centered perspective, aligning design improvements with the existing technical constraints of the legacy platform.

Through contextual inquiries, persona development, and usability testing, I led the research that informed both workflow optimizations and interface redesigns. The project resulted in more streamlined processes that significantly improved user satisfaction and system usability. Additionally, the PCSTrac team embraced user-centered design principles, applying these to future product development.

Results Snapshot:

  • Balanced Technical Constraints and User Needs: Despite the limitations of the legacy system, the user-centered design approach introduced practical solutions that enhanced the user experience without necessitating expensive system overhauls.
  • Increased Efficiency through Streamlined Workflows: By addressing key workflow bottlenecks, the system allowed users to eliminate redundant manual processes, leading to measurable improvements in operational productivity within third-party logistics (3PL) operations.
  • Improved User Satisfaction and Task Efficiency: The redesigned interface, with a cleaner and more intuitive layout, led to increased satisfaction among users who had previously been frustrated by the outdated design. This improvement enhanced task efficiency and reduced friction in daily operations.


Looking for more insights? Use this download form to access my collection of detailed case studies, showcasing impactful results across industries. Discover how I’ve tackled complex challenges with innovative solutions.

Case Study: Uncovering New User Segments for Comcast’s WiFi On Demand Service

How did unexpected user insights help Comcast re-engage customers and improve product workflows?

Comcast’s WiFi On Demand service experienced unexpected growth among various user demographics, including low-income users and former subscribers. Although the service was originally designed for business travelers, our research revealed that many users viewed it as an alternative to traditional home internet, especially during life transitions. These insights gave Comcast an opportunity to re-engage these customers with targeted payment plans, as well as to enhance the product’s application interface to address key pain points and workflow inefficiencies.

Through buyer interviews, personas, user journey mapping, and surveys, I led a comprehensive research effort to understand these diverse behaviors and pain points. This informed strategies to re-engage key user segments and improve the interface design, ultimately increasing recurring pass purchases and expanding the service’s reach across underserved demographics.

Results Snapshot:

  • Re-engaged Former Customers: Comcast developed targeted re-engagement strategies for lapsed customers who had switched to daily passes due to overdue bills. This included customized payment plans aimed at bringing them back into monthly subscription services.
  • Enhanced Workflow and UI Efficiency: Direct improvements to the product’s user interface and workflow design were made based on the pain points identified during user research. These changes optimized the service’s overall usability and reduced friction in the ordering process.
  • Increased Revenue from Underserved Demographics: Insights revealed untapped customer segments, leading to increased recurring pass purchases from previously overlooked user groups, including recent movers and transient users.

Looking for more insights? Use this download form to access my collection of detailed case studies, showcasing impactful results across industries. Discover how I’ve tackled complex challenges with innovative solutions.