Exploring Generative AI Leads Me to One Conclusion: Human-Centered Design Matters More Than Ever

AI discussions span everything from utopian promises to technical deep dives. As someone whose career has centered on observing how humans interact with technology, I’ve witnessed how our systems shape—and are shaped by—our lives. Recently, as a strategist and researcher, I’ve been more deeply exploring generative AI’s potential.

One realization stands out: We urgently need human-centered design (HCD) and established user-centered methods to steer AI’s broader societal impact. By adapting UX principles and embedding them into the foundations of these models, we can avoid further entrenching existing divides, biases, and superficial interactions. This article isn’t a technical exploration of generative AI; it’s an open invitation to rethink how we might design it with greater intention.

It’s tempting to believe that AI could be humanity’s next great leap forward—a technology that will profoundly and ethically enhance our lives. In a perfect world, AI would evolve through global cooperation, with governments and organizations collaborating to set agreed-upon boundaries in our long-term interdependence with this technology.

Let’s face it, that’s not going to happen.

AI is already deeply enmeshed in profit-driven models, and the commercial forces pushing AI’s rapid development are unlikely to yield a blue-sky vision of AI as a mission-driven, human-first global endeavor. We live in a world where technology is monetized as quickly as possible, even when it’s still half-baked and its long-term consequences have barely been considered.

So, what do we do? Our challenge becomes how to guide generative AI’s development in a way that acknowledges its commercialization without abandoning human values.

Given this reality, we start by changing the discourse around AI to embrace its potential and risks instead of sensationalizing them. As we develop these systems, we must ensure their training captures and nurtures the best aspects of who we are. To do that, we must create a common framework for what these “best aspects” are while acknowledging that this will include a wide range of relative truths.

Exploratory research asks big questions. Are we taking necessary steps to ensure AI is optimized to enhance humanity’s richness and complexity, or are we reducing ourselves to commodified data points that reflect only the most sensationalized aspects of our behavior? This question should be at the forefront to avoid repeating mistakes.

Digital’s Complex Legacy

From my early work in broadband development to today, I’ve watched well-intended, beneficial advancements also lead to long-term, harmful consequences. Much like current promises about AI, broadband was initially seen as a democratizing force—a tool to empower and connect. High-minded ideals and a utopian vision drove the people who built these services. I know because I was one of them. I’ve struggled to reconcile the pride of being part of an exciting time in our technological history with disillusionment at how some aspects have turned out. As commercial interests took over, particularly with the adoption of the ad-driven model, the Internet’s original promise was transformed.

Many of the unintended consequences we are dealing with now were foreseen, but it didn’t matter; the dot-com bubble burst created an existential crisis for the industry. It forced profit-or-perish decisions that shaped much of today’s Internet, turning hopeful optimism into commercial necessity. The same unchecked optimism, matched by an equal measure of doomsday prediction, now surrounds AI. Might we use AI development as an opportunity to address the negative aspects of our past technological choices head-on?

Because of how broadband shifted to survive, generative AI has been handed a complex inheritance: profit-optimized content has become the foundation for its simulated understanding of human interaction.

Shallow Data In, Shallow Interactions Out

Too often, technology has forced us to adapt to its limitations rather than expand ours. Early advocacy for natural language interfaces attempted to address this by aligning systems with how people organically communicate and making interactions easier, rather than forcing users to conform to rigid, efficiency-driven workflows. AI should be no different, especially given its potential to be deeply woven into our daily lives.

Today, most AI systems are built on data from platforms like Google, Facebook, and TikTok. These platforms prioritize engagement, rewarding the most attention-grabbing content, not necessarily the most meaningful. While vast and seemingly comprehensive, this data represents inaccurate and incomplete versions of ourselves. Consider for a moment those sources. Do they reflect our goals as a species?

Some AI proponents believe techniques like cross-domain learning and transfer learning—where AI systems are trained on data from multiple domains—can mitigate biases and data gaps. These techniques can improve AIs’ ability to handle more complex tasks. However, these technology-focused solutions are band-aids unlikely to address the underlying design failure: the data itself.

One of my favorite tech dad jokes came from a colleague at a systems integrator: “How did God create the universe in 7 days? There was no legacy system.”

I’m passionate about technology, but we must be pragmatic in its implementation, and the devil is in the details. Surrounded by the hype of AI, it’s easy for tech leaders to forget what legacy systems have taught us: once systems are implemented, they tend to persist, flaws and all. Retooling from scratch is costly and time-consuming, meaning the foundations of today’s AI systems will continue to shape tomorrow’s world. Without a focused effort to integrate context-rich, qualitative information, we risk building AI systems that fail to enrich our lives, limit how we engage with the world, and codify narrow representations of ourselves well into our children’s futures.

We’ve Already Adapted to Technology’s Distorted Lens

Seemingly insignificant behaviors can reveal profound insights that broad datasets miss. Hesitations, pauses, or other external behavioral changes could indicate a point in a workflow that inherently requires a higher cognitive load, or a genuine barrier to completing a task; big data, which typically captures only outcomes, fails to detect this. Even more concerning, users often adapt to poorly designed system experiences and believe their struggles stem from their own inadequacies rather than the technology itself.

We’ve watched technology move beyond being a tool we adapt to in small ways into a force that shapes our culture in significant ways we don’t control, eroding our collective feeling of agency. Narratives pushed by the algorithms behind these platforms don’t just diminish our sense of self-worth and of reality but also encourage consumerism, impulsivity, and emotional reactivity. As AI systems inherit data from these platforms, they reinforce these interactions, codifying hyper-consumerism, performative relationships, and attention manipulation.

Our lived experiences—unique, messy, and deeply meaningful—are being flattened, standardized, and spit back at us in increasingly distorted forms. Not only are we facing a loss of biodiversity but also of human diversity, and it’s unfolding on our watch.

Correcting Course

At the heart of AI’s current development lies a mismatch between its commercial incentives and society’s broader needs. The race to deploy AI to boost stockholder confidence undermines its real potential. Yet, we can correct course, making conscious choices to align AI’s evolution with our better angels.

Pivoting to a Long-Term Investment Model

We should adopt a long-term investment model that encourages careful consideration before releasing AI technologies to the public, instead of using us as experiments and training fodder. It’s tempting to think that if we need more data, we can simply allow users to provide it in real time. However, this data is a feedback loop of simplified tasks and interactions, and I question the ethics of this approach.

Companies can still find a win-win. Jeff Bezos’s well-known approach to Amazon is a powerful example of this kind of long-term investment thinking. From the outset, Bezos emphasized long-term infrastructure over immediate profits, understanding that the company could grow exponentially by building the necessary foundation.

Ethical Walled Gardens

Another crucial component of redirecting AI’s future is the concept of ethical walled gardens. “Walled gardens” often mean monopolistic ecosystems controlled by tech giants like Google or Facebook, but an ethical walled garden serves a different purpose: a secure, gated space where clear moral principles and safeguards govern AI development and deployment.

Privacy-focused laws, such as the European Union’s General Data Protection Regulation (GDPR), help protect user privacy and create secure spaces for data collection. However, they will not inherently address the quality or completeness of AI training data. While these frameworks safeguard data from exploitation, they must be paired with strategic efforts to improve the collected data.

The Value of Qualitative Data

Addressing AI’s inherited blind spots to build better systems means investing heavily in purpose-generated qualitative data such as ethnographic studies, contextual observations, and curated sociological findings. Valuable insights can come from methods that capture real-life, context-rich interactions beyond the direct influence of consumerism. Armed with this goal-focused content, AI can begin to internalize the diverse and frequently counterintuitive ways humans engage with the world, providing essential balance.

To ensure AI systems are trained on meaningful data, we should continue to optimize them to incorporate and use unstructured data more effectively. This will require AI models that can contextualize human behavior. Some research suggests that combining big data with smaller, qualitative datasets—often called data triangulation—can help AI systems better reflect the richness of our experiences.
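To make triangulation concrete at the pipeline level, here is a minimal sketch of one way a training mix could weight a small, purpose-generated qualitative dataset so it is not drowned out by a vastly larger behavioral corpus. The corpus names, sizes, and the 20% share below are illustrative assumptions, not a description of any production system:

    import random

    # Hypothetical corpora: a large engagement-driven dataset and a small,
    # purpose-generated qualitative dataset. Names and sizes are illustrative.
    behavioral_corpus = [f"clickstream_record_{i}" for i in range(100_000)]
    qualitative_corpus = [f"ethnographic_finding_{i}" for i in range(500)]

    def build_training_mix(big, small, small_share=0.2, total=10_000, seed=42):
        """Oversample the small qualitative set so it makes up a fixed share
        of the training mix instead of being diluted by sheer volume."""
        rng = random.Random(seed)
        n_small = int(total * small_share)
        mix = rng.choices(small, k=n_small) + rng.choices(big, k=total - n_small)
        rng.shuffle(mix)
        return mix

    mix = build_training_mix(behavioral_corpus, qualitative_corpus)
    # The 500 qualitative findings now supply ~20% of training examples,
    # versus the ~0.5% they would occupy if sampled proportionally.

Sampling weights are the simplest lever; the harder work is generating qualitative data worth upweighting in the first place.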

Generative Qualitative Research for AI

Qualitative data is used today to augment and refine AI models; this is not new. However, this unstructured data (an estimated 80% of all content available) isn’t purpose-built for the role. As discussed, it is subject to the same biases and blind spots as other data. Qualitative data requires significant human effort to clean, code, and curate. A partial solution currently being explored uses a hybrid model, where researchers use AI tools to help codify the data. The importance of human oversight cannot be overstated in this effort.

A complementary and amplifying approach to mining our unstructured-data treasure trove is to conduct targeted generative studies whose findings can be weighted more heavily to assist AI models in deciphering the raw data they consume. Rigorous methodologies and research objectives based on collectively defined goals can ensure that the specific concepts and dynamics the AI model is trained on are conveyed in a way that requires less human oversight over the long term.

A framework for generative research for AI model training can and should be defined. Methods used in traditional software systems today can provide insight. Foundational principles of current user research include one-on-one interviews (rather than focus groups) and a heavy reliance on observational and contextual methods. When properly structured, implemented, and analyzed by people with expertise in human-centered design, this type of research can be highly directional and capture a wide range of targeted, nuanced insights. The results of these studies, codified with the help of AI and weighted by humans, could provide robust training benchmarks.

As we shape the future of AI, let’s move beyond technological optimism or doomism, roll up our sleeves, and work to address our industry’s complex legacy. The time for developing mutually defined goals for human-centered design in AI is now. We have a choice: allow this technology to evolve only to reflect what’s commercially viable, or intervene to ensure AI represents us authentically. Answers lie in investing in richer data, focusing on human experience, and refusing to let AI reinforce the flaws that have plagued our digital spaces for decades.


About the Author
Dorothy is a digital strategist and researcher who works with companies to blend human-centered research with emerging technologies to navigate complex challenges.

Shine Alpha FAQs

Everything you need to know about our alpha program

About Shine

What is Shine? Shine is a growth platform that helps experienced professionals navigate career transitions and strategic positioning through structured diagnostic inquiry. We’re not an “AI tool” any more than we’re a “database tool”. What makes Shine work is decades of strategy and research methodology expertise embedded into the architecture, guiding you through evidence-based self-discovery.

How is Shine different from other career tools? Most career tools either give you templates (resume builders, LinkedIn optimizers) or act like all-knowing oracles (AI that makes broad assumptions). Shine investigates first, gathers evidence, and surfaces patterns you might not have articulated to build a strategy based on what’s true about your experience. The diagnostic rigor comes from strategic consulting, not generic AI model predictions based on the most common use cases. 

What does “growth platform” mean? Shine focuses on your trajectory, not just your current state. We illuminate where your capabilities are positioning you, what needs development to reach your next solid step forward, and how to build on authentic strengths rather than fix perceived weaknesses (see HCAI). It’s about guided growth through structured reflection and strategic frameworks, not quick fixes or motivational platitudes. Growth happens when you understand the terrain clearly enough to navigate it deliberately.

Diamonds are created under pressure.
Craft makes them shine. ✨


What is Human-Centered AI (HCAI)?

Human-Centered AI is an approach to AI development that prioritizes human agency, transparency, and collaboration over automation or replacement. Instead of positioning AI as an authority that provides answers, HCAI treats AI as a tool that amplifies human judgment, surfaces options, and helps people make better-informed decisions while maintaining control over outcomes.

Shine embodies HCAI principles by acting as a strategic thinking partner, not a directive coach. We don’t tell you what to do. We help you see patterns, challenge assumptions, and explore possibilities. You maintain full agency. The AI handles structured inquiry and pattern recognition, while you bring context, judgment, and decision-making. This collaboration produces better outcomes than either human or AI could achieve alone. Also, please see the Privacy & Security section. As a platform with a philosophy, Shine will never be extractive.


The Alpha Program

What’s included in the alpha?

Right now, alpha participants get access to Phase 1: Discovery & Analysis. This includes:

  • Structured diagnostic conversation (4 topics exploring your differentiation, goals, challenges, and patterns)
  • Market research on your target roles
  • Three deliverables: Situational Analysis, Process Roadmap, and an AI Chat Prompt you can use with other tools

What’s NOT included in the alpha?

Phase 1 is a diagnostic teaser and conversion tool—it demonstrates how Shine thinks and surfaces 2-4 strategic themes worth exploring. It does NOT include:

  • Complete diagnosis of all positioning issues
  • Execution-ready materials (resume rewrites, LinkedIn optimization, outreach templates)
  • Ongoing coaching or accountability
  • Full networking strategy development
  • The complete Shine process (Phases 2+) will be available after alpha

How long does Phase 1 take?

The diagnostic conversation typically takes 30-45 minutes depending on how much detail you share. The final analysis and report generation takes about 2 minutes.

What happens after I complete Phase 1?

You’ll receive your full analysis report with three sections:

  • Situational Analysis: The strategic themes and patterns we identified
  • Process Roadmap: How the full Shine process would build on this foundation
  • AI Chat Prompt: A portable summary you can use to continue strategic conversations with any AI tool

You’ll also have the option to provide feedback and express interest in continuing with the full Shine process when it launches.


How It Works

What information do I need to provide?

You’ll complete a brief intake form with:

  • Your current role, industry, goals, and objectives
  • Your biggest frustration or challenge
  • Communication preferences
  • Your resume (PDF format, doesn’t need to be current)

Does my resume need to be up-to-date?

No! Shine uses your resume to understand your full career context, not to evaluate formatting or recency. A resume from a few years ago works fine. We just need to see your experience breadth.

Even if you’ve stayed in the same role for years, Shine can surface strategic patterns. The conversation provides the depth; the resume provides the scope. Don’t let “I need to update my resume first” delay you. Upload what you have.

Do I need to upload my resume?

Yes, in PDF format. Shine uses your resume to understand your career context and experience breadth.

Can I stop and resume later?

Yes. Your conversation is saved automatically, so you can leave and come back anytime. Just log back in to continue where you left off.

That said, the diagnostic works best when completed in one sitting (30-45 minutes) while your thoughts are fresh. The conversation is intensive and requires thoughtful introspection, but if you need to step away, your progress is saved.


The Process

What are the 4 topics in Phase 1?

  • Differentiation Evidence: Where you’ve operated at your best
  • Career Direction Clarity: What your goals actually look like in concrete terms
  • Challenge & Blocker Discovery: Times things didn’t go as planned and what that reveals
  • Pattern Validation: Confirming the themes we’ve identified together

What if I don’t know how to answer a question?

That’s normal and useful information! Shine will note areas of uncertainty and either probe differently or mark them as “hypothesis” in your analysis. Not having perfect clarity is part of why you’re here.

Will Shine tell me what to do?

No. Shine illuminates patterns, surfaces contradictions, and shows you the terrain. You make the decisions. We’re radically committed to user agency—we’ll challenge assumptions and push back on misalignments, but we never replace your judgment.

What if my goals conflict with my evidence?

We’ll surface that contradiction openly. For example, if you say “I want VP roles” but describe wanting IC work, we’ll name that disconnect and help you explore which is actually true. Accurate diagnosis serves you better than false encouragement.


Privacy & Security

Is my data private and secure?

Yes. Your conversations, resume, and all career information are encrypted and stored securely. I cannot view your messages or data without explicit written consent. There’s no “admin dashboard” where I can see conversations. Accessing your information requires directly querying the database (and a significant amount of time to piece all the different data points together), which I only do with your permission for support or feedback purposes.

Will you view or share my conversations or content?

Absolutely not. Your conversations are completely private. I will never share quotes or use your information in marketing materials, case studies, or any public content without your explicit written permission.

As an alpha tester, you may be asked whether you’d be willing to share feedback or anonymized insights to improve Shine, but this is always optional and requires your clear consent first.

What data do you collect and why?

We collect:

  • Your conversations and responses (to provide the Shine service)
  • Your resume and career information (for strategic analysis)
  • Usage data (to improve the platform)

Your data is yours. It will never be shared, sold, or used for any purpose beyond providing you the Shine service without your explicit written consent.

Can I export or delete my data?

Yes. During alpha, data export and account deletion require manual database work, which takes several hours. There will be a processing fee to cover this time. We’re building self-service export and deletion features. User control over your data is a core HCAI principle, not an afterthought.


Technical & Logistics

What technology does Shine use?

Shine is built on Claude (Anthropic’s AI), customized with extensive system instructions that enforce our structured process, evidence-based approach, and growth-oriented framing.
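For the technically curious, here is a minimal sketch of what a system-instructed model call looks like with Anthropic’s Python SDK. The system prompt below is a stand-in for illustration (Shine’s actual instructions are far more extensive), and the model name is a placeholder:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Stand-in system prompt; not Shine's actual instructions.
    SYSTEM_PROMPT = (
        "You are a strategic thinking partner. Ask one diagnostic question "
        "at a time, ground every observation in evidence the user provides, "
        "and surface patterns without telling the user what to do."
    )

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": "Here's my situation..."}],
    )
    print(response.content[0].text)

The system instructions, not the model itself, are what enforce the structured, evidence-first behavior described above.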

Can I edit my responses after submitting?

Not during alpha. The diagnostic process is designed to capture your authentic, in-the-moment responses. Overthinking or editing defeats the purpose of pattern recognition.

What if I’m not satisfied with my analysis?

Alpha participants will have the opportunity to provide feedback. If your analysis missed the mark, we want to know why—that helps us improve the platform. You can also request a debrief to discuss your results.


Costs & Commitment

Is the alpha free?

Yes. Phase 1 is free during alpha in exchange for your feedback and participation in occasional user research debriefs.

What will Shine cost after alpha?

Pricing isn’t finalized yet, but our goal is to make the full Shine process accessible to experienced professionals navigating transitions. We’ll never use extractive scare tactics or pressure-based sales.

Am I committing to anything by participating?

No. Alpha participation is voluntary, and there’s no obligation to continue beyond Phase 1 or provide feedback (though we hope you will!).


Who Shine Is For

Who is Shine designed for?

Experienced information workers (5+ years) navigating career transitions, positioning challenges, or strategic growth. People who value objectivity, respect, and evidence-based approaches over motivational fluff.

Who is Shine NOT for?

  • Early-career professionals still figuring out their baseline skills
  • People looking for quick resume templates or LinkedIn hacks
  • Anyone wanting a coach to tell them exactly what to do
  • People uncomfortable with direct feedback or honest pattern recognition

Do I need to be actively job searching?

No. Shine helps with strategic positioning, career direction clarity, and professional growth—whether you’re job searching, preparing for promotion conversations, or figuring out what’s next.


Getting Started

How do I join the alpha?

Email dorothy@danforth.co

What happens after I request access?

You’ll receive an email with:

  • Link to complete your intake form
  • Instructions for starting your Phase 1 diagnostic
  • Timeline for what to expect


The Price of Short-Term Thinking

Every strategist has lived through this moment: you run deep generative research, flag risks, recommend caution, and watch the company barrel ahead anyway. Months later, the fallout is exactly what you predicted.

The recent enforcement of a $25 minimum budget per job posting on Indeed is one of those moments. It may look like a small tweak in pricing. But to those in the field (recruiters, agencies, programmatic buyers), it’s a blunt instrument that strips away flexibility, inflates costs, and erodes trust.

What Changed

Until mid-2025, employers using sponsored jobs on Indeed could average their budgets across multiple postings. A campaign budget of $250 might cover ten jobs, and recruiters could rotate postings in and out based on weekly priorities.

That flexibility ended July 1, 2025. From that date forward, every job ID required its own $25 minimum allocation. Add a new job to an existing campaign? That’s another $25. Remove one and replace it? Another $25.
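A toy calculation using the figures above shows how quickly the floor compounds (the 100-job case matches the number recruiters cite later in this piece):

    MIN_PER_JOB = 25

    def minimum_spend(num_jobs: int) -> int:
        """Post-July-2025 floor: every job ID needs its own $25 allocation."""
        return num_jobs * MIN_PER_JOB

    # Before the change, a $250 campaign could flexibly cover ten rotating jobs.
    # After it, the same ten jobs consume the full $250 with zero flexibility:
    for n in (10, 100):
        print(f"{n} jobs -> ${minimum_spend(n):,} minimum")
    # 10 jobs -> $250 minimum
    # 100 jobs -> $2,500 minimum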

Strategy vs Tactics

Here’s the uncomfortable truth: this isn’t “strategy.” It’s revenue desperation dressed up as discipline.

Strategy is about positioning, resilience, and durable advantage. Tactics are the levers you pull for a mid-strategy correction or quarterly bump. 

Consider the context: Post-COVID hiring volumes from 2021-2022 were never sustainable. The market cooled, volumes normalized. This wasn’t a shock; it was obvious. Rather than prepare investors for a cyclical downturn, Indeed chose to squeeze customers for near-term revenue.

Meanwhile, recruiters drowning in applicants needed agility, not more friction. Instead, the platform raised barriers and eroded goodwill.

Strategic Path Not Taken

A strategically focused company would have prepared both sides of the house (customers and shareholders) for the inevitable cycle. Instead of implementing revenue-extraction tactics, they could have invested in operationalizing AI in ways that entrench the platform in buyer operations and build loyalty.

What “operationalizing AI” could have meant:

  • Budget optimization assistants: AI tools that help recruiters dynamically allocate spend across jobs, based on performance signals and hiring urgency
  • Job rotation intelligence: Instead of punishing rotation, AI could automate it by pausing low performers, boosting high performers, surfacing recommendations
  • Candidate-fit triage: Reducing recruiter screening burden with models that prioritize quality applicants, especially when application volumes are high
  • Market insights dashboards: AI-powered labor market data to position Indeed as a trusted advisor, not just a job board

These are sticky features. Once recruiters bake them into daily workflows, churn drops.
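To make the first of these ideas concrete: a budget-optimization assistant need not be exotic. Even a simple performance-weighted reallocation, sketched below with hypothetical job names and scores (this is an illustrative assumption, not Indeed functionality), would preserve the flexibility the $25 floor destroyed:

    # Hypothetical sketch: reallocate a fixed campaign budget toward postings
    # with stronger performance signals, guaranteeing each job a small floor.
    def reallocate(budget: float, performance: dict[str, float],
                   floor: float = 0.0) -> dict[str, float]:
        """Split the budget proportionally to each job's performance score,
        guaranteeing every job at least `floor` dollars."""
        pool = budget - floor * len(performance)
        total_score = sum(performance.values())
        return {
            job: round(floor + pool * score / total_score, 2)
            for job, score in performance.items()
        }

    # e.g., applies-per-impression or urgency-weighted scores from the platform
    signals = {"senior_engineer": 0.9, "recruiter": 0.4, "office_manager": 0.1}
    print(reallocate(budget=250, performance=signals, floor=10))
    # {'senior_engineer': 151.43, 'recruiter': 72.86, 'office_manager': 25.71}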

This is the difference between investing in R&D and executing a stock buyback. One builds long-term capability and customer value, while the other merely manipulates a short-term metric. It is the choice between building for tomorrow and scrambling for today.

Competitive Reality

We’re at the early bubble stage of LLM hype. Investors are rewarding companies for credible AI narratives, not raw revenue grabs. A story of “we are transforming recruiter workflows with AI to capture the next decade of market share” would resonate with both shareholders and customers. Indeed chose to stress-test customer relationships at exactly the moment those customers were most cost-sensitive and sitting on abundant applicant supply. This misstep is magnified by the competitive landscape.

The competitor landscape for Indeed makes its inflexibility even more glaring. LinkedIn operates on pay-per-click models with cost-per-click typically ranging “$1.50 to $4.50 for generic roles,” allowing recruiters to “only pay when someone clicks” and “adjust in-flight” with daily and total budgets. ZipRecruiter uses subscription-based models with “predictable expense for companies hiring continually” and “job slots” for multiple concurrent roles. Both preserve the agility that buyers prize.

What this means strategically: when recruiters hit budget pressure, they now have clear alternatives that offer the flexibility Indeed just eliminated.

Backlash and Risks

On paper, the $25 floor looks like revenue optimization: predictable ARPU per job ID, simplified enforcement, fewer micro-budgets to manage.

The market response has been swift and negative. Across Reddit recruiting forums, industry podcasts, and trade publications, recruiters are expressing frustration. The Chad & Cheese industry podcast reported recruiters saying roles that “don’t justify the $25 minimum just won’t run” and planning to “reduce our Indeed spend by 15–20%.” CXR community members are calling the change “monumental,” noting that “post 100 jobs you pay $2,500.” The AIM Group trade publication noted “pushback from recruiters may give a boost to competitors like ZipRecruiter, LinkedIn, niche boards.”

This isn’t just grumbling; it’s budget reallocation. Even a 15-20% spend reduction across thousands of employers compounds quickly.

The risks also compound:

  • Customer goodwill erosion—Recruiters feel trapped and exploited
  • Budget leakage—Even modest reductions in spend per customer aggregate to meaningful revenue loss
  • Brand damage—Once recruiters perceive you as extractive rather than enabling, recovery is slow
  • Competitive opening—More flexible pricing models suddenly look friendlier

Strategy is supposed to anticipate second-order effects. This move ignored them entirely.

Looking Ahead

We’re entering a period where the hype cycle around large language models will cool—likely within 2 to 3 years. Investors will punish companies that used AI as a story without building real, durable customer value.

The winners will be those that are prepared by strengthening customer operations, building loyalty through indispensable workflows, and using AI to entrench themselves in buyer processes.

The Lesson for Strategists

Ultimately, this isn’t a story about a pricing change. It’s a cautionary tale about strategic integrity. The core mandate of strategy isn’t to optimize for the next quarter’s earnings call; it’s to build a system of advantages that compounds over time.

True strategy is the discipline to choose the latter, especially when the former is so tempting. That is how durable legacies are built, and how market leaders are truly separated from the pack.

Why Your Million-Dollar Strategy is Collecting Dust

You’re digging through your company’s document repository when you find it: a deck from a top consulting firm you didn’t even know had been hired. Inside, the analysis dissects challenges identical to what your team is struggling with today. In fact, you found it while searching for material for a presentation you’re drafting to describe the same problems. When is this from… 2019?

You find more, plus a $2.5 million statement of work. That’s a lot of money for this to be collecting dust. You look more closely at the deck. The content is still relevant, the team appears highly skilled, and the methodologies look solid. What happened? Why has nothing been done with this?

Why does this happen so often?

Why Great Strategies Don’t Get Implemented

Large consulting firms operate on an assembly-line approach. Senior partners diagnose and sell the vision. Mid-level managers translate it into diagrams and proprietary language. Analysts execute data gathering. With each hand-off, context evaporates. Nuance dissolves. Accountability diffuses until no one truly owns the outcome.

A CIO I once spoke with during a cold call crystallized a common bias. “Why hire you?” he asked. “If I hire [Big Four Firm], they’ll back a truck up filled with smart people. I can get stuff done very quickly.” He saw headcount as velocity. I see fragmentation risk. Bodies mistaken for strategic coherence. The reality is that no amount of capacity can move faster than people’s ability to adopt change. This makes effective internal socialization and change management critical.

An oversimplification problem compounds this. A director of product management at a major publisher once asserted to me that he knows an idea is good if he can understand it in one sentence. This preference for repeatable methodologies, while operationally sound for firms managing hundreds of engagements, often reduces complex strategic problems to familiar frameworks. The result is lazy thinking: “We’re like Yelp, but for people.” “We’ll use micro-A/B testing, like Amazon.”

To be fair, when stars align, big firms can deliver exceptional results. For example, McKinsey’s partnership with ING Netherlands on its agile transformation is a masterclass in strategic continuity. McKinsey didn’t just deliver a report; they embedded consultants within ING for years. These consultants worked side-by-side with ING’s teams to co-design and co-implement a new operating model, effectively acting as coaches and teachers. This ensured the strategy was not just understood but fully owned and executed by ING’s people, building the internal capability to sustain the new model long after.

But these success stories are the exception due to structural constraints. Research from the firms themselves reveals the pattern: McKinsey finds that roughly 70% of transformations fail to achieve their intended impact, while Bain’s classic analysis estimates that 37% of strategy value is lost between planning and execution due to “significant value erosion” during implementation.

The big brand promise of lower risk is hollow against the industry’s 70% failure rate. Brilliant partners sell the vision, but execution gets handed to less experienced teams while the brand primarily protects the firm, not your success. For mid-market companies especially, this creates catastrophic mismatches in cost, culture, and scale.

A large consulting firm can deliver a “perfect” strategy, but if it was developed in a black box without building buy-in, training, and political capital within the client organization, it will die on contact. The failure of the hierarchical hand-off model is that it is structurally designed to prevent this necessary co-creation. It’s a delivery model, not an integration model.

Embedded Strategic Continuity

Complex problems demand a different approach: embedded continuity. Unlike a static plan, strategy is a dynamic process of adaptation. Market shifts, technical constraints, and user feedback will force pivots. The critical question is whether your core insights survive these changes or get lost in translation.

This requires a highly capable strategist who acts as a teacher, not a hierarchical team. The value isn’t in limiting resources but in strategic coherence. When the same expert mind connects initial research to mid-course corrections, it replaces fragmented hand-offs with coherent ownership, ensuring the core insight not only survives but intelligently adapts.

Value comes from cross-functional fluency that connects domains and stakeholders. This builds internal resilience by coaching teams through complex problems and maintaining strategic coherence from research to execution. The deliverable is not a deck, but a capable team and a living strategy your organization can evolve independently.

This approach is enabled by its focus on outcomes, not utilization. Without the burden of supporting armies of junior consultants, an independent strategist’s incentives are aligned solely with client success. This allows for candid advice.

For example, I once joked to a CEO that he needed to “fix your org structure first” before an expensive intranet. We laughed because it sounded absurd. But his high-growth startup had unclear team ownership. The intranet was a good idea that would have failed at that moment. Because my value wasn’t tied to that specific project, I could point out the real obstacle. We paused, clarified, and launched successfully later. The business model makes honesty possible.

Why an Independent Strategist is a Smart Financial Bet

If both big firms and independent strategists have a chance of failure, the one that costs significantly less presents a far better risk-adjusted return. A failed $2.5M project is a catastrophe. A failed $250k engagement is a learning experience.

The math is simple but makes the point. Hold the potential gain from success constant: at the same success rate, the independent’s far lower cost yields a better expected value for any positive gain. And a skilled strategist who chooses engagements well should achieve a success rate significantly higher than 30%, because their entire model avoids the top causes of failure.
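A quick sketch using this article’s cost figures (the gain is a hypothetical placeholder, and the 30% success rate comes from the transformation research cited above):

    def expected_value(success_rate: float, gain: float, cost: float) -> float:
        """Expected net value of an engagement: P(success) * gain - cost."""
        return success_rate * gain - cost

    GAIN = 10_000_000  # hypothetical value of a successful strategy

    big_firm = expected_value(success_rate=0.30, gain=GAIN, cost=2_500_000)
    independent = expected_value(success_rate=0.30, gain=GAIN, cost=250_000)

    print(f"Big firm EV:    ${big_firm:,.0f}")     # $500,000
    print(f"Independent EV: ${independent:,.0f}")  # $2,750,000
    # At equal success rates, the cost difference alone hands the independent
    # a $2.25M edge, whatever the size of the gain.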

The Future of Consulting

The value of large teams for data aggregation and slide production is plummeting. LLMs can now perform first-pass synthesis and drafting in minutes.[1] The future value of a strategist lies in high-order judgment, nuanced stakeholder management, coaching, and decision-making under uncertainty. These are precisely the skills championed by the embedded guide model.

These evolving capabilities connect to broader questions about where human expertise creates irreplaceable value. As I’ve explored elsewhere, the most strategic applications of technology require deep understanding of human behavior and organizational dynamics. These particular capabilities can’t be automated away.

The word “strategy” itself has become so diluted it’s applied to everything from banner colors to major pivots. When tactical decisions get elevated to “strategic” status (often because strategy commands more respect and budget), it creates confusion about what genuine strategic thinking actually involves. Real strategy isn’t about optimizing for next quarter. As I examined in The Price of Short-Term Thinking, it’s about building systems of advantage that compound over time.

Choosing the Right Approach

The consulting model must be matched to the problem, but you have more options than big firm versus independent consultant.

Internal resources make sense when you have available strategic talent with bandwidth for deep thinking. Current market realities complicate this. Many organizations lost senior strategic expertise during recent workforce adjustments, creating capability gaps where strategic work gets assigned to people already managing operational responsibilities.

Hiring from the now-large available talent pool is another option. However, a professional consultant brings a distinct advantage: they are specialists in the art of external problem-solving, equipped with proven frameworks to navigate complexity from day one. Seasoned, dyed-in-the-wool strategy consultants excel at hyper-discovery, provide stepped results immediately, and avoid the pitfalls of large-scale hand-offs.

Stop Building a Strategy Graveyard

Complex challenges need guides, not decks. The goal is to build something vital with clarity and resilience, growing your capability, not your document folder.

When you commission strategy work, consider whether you’re building internal capability to navigate complexity or creating another deliverable. The embedded continuity approach focuses on developing organizational resilience that continues after the engagement ends.

Ready to build capability? The conversation starts with diagnosing whether your challenge needs strategic coherence or analytical horsepower—and choosing your approach accordingly.


This approach isn’t theoretical:
As an embedded guide, I coached senior Merck product and marketing leaders through an adapted design thinking process. Together, we moved from patient interviews to tested concepts, building the team’s capability while identifying digital health opportunities. The result wasn’t just a report; it was a patient-centered strategy and a more innovative team. Take a look: Merck’s Chronic Illness Research


References

  1. “How to beat the transformation odds.” McKinsey & Company, 2015.
  2. “Leading organizations…” McKinsey & Company, 2017.
  3. “The agile at scale paradox.” Harvard Business Review, 2018.
  4. “ING’s agile transformation.” McKinsey & Company, 2017.
  5. “Management Tools & Trends 2023.” Bain & Company, 2023.

  1. While LLM capabilities for synthesis continue advancing rapidly, current limitations in context, judgment, and stakeholder dynamics require human oversight for strategic applications. ↩︎

About the Author
Dorothy is a digital strategist who helps companies navigate complex challenges through embedded, continuous partnerships. She focuses on building internal capability and living strategies, ensuring insights survive from initial research through execution.

Stop Using Unattended LLMs for Candidate Screening. Do This Instead.

The other day I saw a post about using LLMs for candidate screenings, i.e., fully unattended systems with no human in the loop. As in many current discussions about AI, opinions seemed to split into for and against. As someone who has researched and written about human-centered AI usage, I thought this was a good opportunity to explore issues specific to AI in hiring. I did, after all, work a stint at Indeed.

Let me be clear. I am not against using LLMs in hiring. I’ve been a technology consultant for *checks notes* a million years. I have witnessed promising new technologies applied in ways that ultimately limited their potential to serve us and improve our lives. The goal is smart, human-focused application. This means moving beyond the hype and fear to identify sustainable, win-win use cases that serve both organizations and candidates.

The current trend of deploying unattended LLMs for first-tier screenings is a strategic misstep. Even in an employer-favorable market, it risks brand damage and candidate alienation. When the market inevitably swings back, this practice will become active poison for talent acquisition. A 2025 Harris Poll shows 84% of candidates prefer a human for the initial screen, and Pew Research found 66% would not apply to a company using AI in hiring decisions. This isn’t sustainable innovation; it is a short-term gambit with long-term consequences. No bueno.

It is all well and good to critique what is broken. Real value, however, lies in envisioning what comes next. So let’s ask the question: what might a more human-centered AI collaboration in hiring actually look like?

The shift required is fundamental. We must stop asking how AI can replace humans and start designing how it can augment them. Consider the common frustrations on both sides of the interview table. A candidate often struggles to decode the true priorities hidden within a job description. A hiring manager, juggling endless meetings, may have only glanced at the résumé before the call. This creates an inefficient dance where both parties waste time simply establishing basic alignment.

This is where a transparent AI copilot shows promise. Imagine a system that analyzes the résumé against the job description before the interview, surfacing qualitative matches (not just keywords) and potential gaps. That analysis is then shared on screen with both the candidate and the hiring manager at the start of the conversation. The LLM provides suggested talking points or areas for clarification that the hiring manager can use or overrule. It becomes a collaborative tool. The hiring manager gets cognitive relief to lead a more focused discussion. The candidate gains agency to clarify or expand on points in real-time. The AI isn’t the judge; it’s the moderator ensuring both parties start on the same page.
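As a sketch of this thought experiment only (the prompt, model name, and output format below are assumptions, not a description of any real product), the core of such a copilot could be a single transparent LLM call whose output is displayed to both parties:

    import anthropic

    client = anthropic.Anthropic()

    def pre_interview_brief(resume: str, job_description: str) -> str:
        """Produce a shared brief (matches, gaps, talking points) that is
        shown to BOTH the candidate and the hiring manager."""
        prompt = (
            "Compare this resume with this job description. List: "
            "1) substantive matches (not just keyword overlap), "
            "2) potential gaps worth clarifying in conversation, "
            "3) three neutral talking points.\n\n"
            f"RESUME:\n{resume}\n\nJOB DESCRIPTION:\n{job_description}"
        )
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model name
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

Nothing here is novel engineering; the design choice that matters is that the output is shared on screen rather than hidden behind the hiring process.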

Something like this was once logistically challenging, but post-COVID, the ubiquitous nature of remote meetings makes an AI-assisted first-tier interview a viable implementation. If successful, the candidate can progress to more open-ended rounds in person or remotely. I don’t propose this as a fully baked solution, but as a thought experiment for how human-centered AI might be operationally leveraged. No doubt some companies are already thinking this way or even building it. The core principle is that sustainable application requires transparency and shared benefit for both the hiring manager and the candidate.

The value here is derived from two non-negotiable principles: radical transparency and shared benefit. The hiring manager gains efficiency through reduced cognitive load and sharper focus. The candidate gains agency through visibility into the process and the opportunity to clarify context. This isn’t about replacing human judgment; it’s about enhancing its quality and fairness. This approach is best suited for roles requiring nuanced evaluation, not high-volume, low-complexity hiring where its advantages are diminished. No worries, we can come up with empathetic alternatives for that as well…with a little nuanced thought.

Humans are true experts at that.

The debate around AI in hiring does not need to be a battle between evangelists and skeptics. The most strategic path forward rejects this false dichotomy. We do not need to be uncritical advocates who deploy technology blindly, nor do we need to be reactionary opponents who dismiss its potential entirely. The goal is to thread the needle to enhance efficiency without eroding trust. Good vibes only. 

Final thought from my heart to those struggling in this job market who might be reading this. If you are unemployed or underemployed, we’re in an objectively soft employment market, and AI is now crashing the party. The odds are stacked, but there are many of us in tech working on this. We see you. Take a deep breath. Remember who you are: the same amazing person you were before the market shifted. You are unique, powerful, and you’ve got this. Push on.

Feeding Two Wolves: How Balancing Creativity and Focus Fuels Innovation and Mastery

Updated 9/29/25 for clarity and development of key concepts.

There’s an old parable about two wolves living inside us. One represents our darker nature, the other our better instincts. The lesson: the wolf you feed is the one that thrives. But when it comes to innovation and mastery, I’ve found the real challenge isn’t choosing which wolf to feed. It’s learning to feed both.

In my work, those two wolves represent divergent and convergent thinking. Divergent thinking explores possibilities without constraint. Convergent thinking brings discipline and execution. Most of us naturally favor one over the other. That imbalance shows up in how we work, how teams function, and ultimately, in what gets built.

The Atelier Method and Innovation

Several years after art school, I returned to study master draftsmanship through the traditional atelier model, specifically the Grand Central Atelier methods. This classical approach emphasizes precision, observation from life, and technical rigor. It might seem disconnected from digital strategy work, but the parallels are telling.

My painting American Music (inspired by the Violent Femmes song) began with pure exploration. I didn’t know what the piece would become. I started with the song’s energy and let the concept reveal itself through sketching and experimentation. That’s divergent thinking: generating possibilities, following intuition, staying open to what emerges.

But once the concept crystallized, everything shifted. The next months were convergent work: hours of precise execution refining tones, adjusting shadows, capturing subtle shifts in light. Every brushstroke required focus and technical control. The creative freedom that sparked the idea gave way to disciplined execution.

This same pattern plays out in every innovation project. Early stages require divergent thinking: brainstorming, prototyping, exploring what’s possible without immediately judging feasibility. Then comes the shift to user research, data analysis, strategic planning. The wild ideas get refined into something that can actually ship.

Why Balance Is Hard

Most people get stuck on one side. Some teams generate endless ideas but never execute. Others jump to execution before exploring alternatives. They optimize the first solution they find rather than the best one.

I work with teams to move fluidly between these modes. Early in a project, we deliberately create space for divergent thinking. We encourage ideas that might seem unfocused but reveal valuable directions. Then we shift intentionally to convergent work, refining those insights into actionable strategy.

When it works, the two modes compound. Disciplined execution often sparks new creative insights. Exploring edge cases during implementation reveals better approaches. The wolves feed each other.

The Research Limitation

This balance matters because of a fundamental constraint most companies miss: you cannot research your way to breakthrough innovation.

Research is invaluable for understanding user needs and validating market fit. But research is inherently backward-looking. It tells you what people know they want based on what they’ve experienced. Groundbreaking products emerge from creative leaps beyond what users can articulate, combined with disciplined execution that makes those visions real.

Apple didn’t research their way to the iPhone. They made a creative bet about how people would want to interact with technology, then executed with obsessive focus on details. The research came later, validating and refining the core vision.

This is why companies staffed with smart people doing rigorous research still produce incremental products. They’re feeding only the convergent wolf, the one that optimizes known problems with proven methods. The divergent wolf that generates genuinely new possibilities is starving.

Progress requires both: research and data to ground ideas in reality, plus creative vision to push beyond what’s currently known or proven.

Mastery Works the Same Way

This pattern extends beyond product development to any pursuit of mastery. In classical realism, you don’t simply copy what you see. You interpret, bringing both technical precision and creative understanding to the work.

The process moves between modes constantly. You start with creative freedom, exploring how to approach the subject. Then you focus intensely on technical execution: mixing exact colors, capturing precise values. But during that focused work, you often discover new possibilities. A technique you practiced for realism suddenly suggests an unexpected creative direction.

That’s where divergent and convergent thinking integrate naturally. Not as opposing forces, but as complementary capabilities that strengthen each other through practice.

Application

If you recognize yourself getting stuck in one mode:

Too much divergence (endless ideation, nothing ships):

  • Set concrete decision points: “We explore until X date, then commit”
  • Define what “good enough” looks like before you start
  • Build in forcing functions that require convergence

Too much convergence (optimizing the first solution):

  • Deliberately schedule divergent thinking time early in projects
  • Use creative constraints that force novel approaches
  • Reward exploration, not just execution

For teams:

  • Make the mode shifts explicit: “We’re in divergent mode until Friday”
  • Don’t judge divergent ideas by convergent criteria (and vice versa)
  • Staff projects with people who can operate in both modes, not just their preferred one

The goal isn’t perfect balance. It’s fluidity—knowing when to explore, when to focus, and how to let each mode inform the other.


Would you like to learn more about divergent and convergent thinking in technology innovation? Check out the following materials:

The Synergy of Diverge and Converge in Design Thinking (Voltage Control): How divergent and convergent thinking are essential in the design thinking process, offering practical tips on how to integrate these approaches into innovative projects.

Unleash the Unexpected for Radical Innovation (MIT Sloan Management Review): Explores how radical innovation often emerges from unexpected ideas, highlighting the importance of environments that encourage divergent thinking and creativity. (May require subscription)

The Design of Everyday Things (Don Norman): A classic UX book, essential for understanding design thinking, usability, and balancing creativity with practical application in product design.


About the Author
Dorothy is a digital strategist and researcher who helps companies turn big ideas into real-world innovations. Outside of work, she is applying this same balance of creativity and focus to her current pursuit of master draftsmanship.

Case Study: Shaping Crypto Innovation for Schwab

How can a trusted financial institution like Schwab innovate in the rapidly evolving world of cryptocurrency while maintaining its credibility and regulatory alignment?

As Schwab’s leadership grappled with how to approach the crypto market, they were facing substantial outflows, losing hundreds of millions of dollars each month as investors turned to competitors for crypto investments. They needed a strategy that would recapture these investors while preserving their reputation for trustworthiness and regulatory adherence. I led research for an initiative within Schwab’s Innovation Accelerator to explore whether, how, and to what extent they should enter the crypto space. Through in-depth market analysis, investor research, and strategic ideation, we developed a path forward to provide Schwab with a competitive edge without undermining the brand’s core values.

In collaboration with the Innovation Accelerator team, I provided crucial decision-making insights, revealing the mindsets of both existing Schwab clients investing in crypto elsewhere and crypto-focused investors not yet using Schwab. Our evaluation of key competitors such as Fidelity and Robinhood informed a risk-reward analysis for various entry strategies. Beyond a direct crypto offering, we identified a valuable opportunity for Schwab to enhance its position by providing advanced decision-making tools—offering investors clarity while aligning with Schwab’s trusted brand image.

Results Snapshot:

  • Key Market Opportunity Uncovered: Our research identified a major opportunity for Schwab to provide advanced decision-making tools for crypto-curious investors, bridging the gap between the brand’s trusted reputation and the growing interest in cryptocurrency.
  • Data-Driven Insights for Executive Decision-Making: Provided Schwab leadership with comprehensive data on crypto investor mindsets, including both existing clients investing in crypto elsewhere and new investor segments. This data was crucial for evaluating market entry options.
  • Competitive Analysis for Strategic Differentiation: Offered an in-depth comparison of competitors like Fidelity and Robinhood, highlighting the risks and rewards of various entry strategies. Schwab’s competitive edge was tied to a differentiated platform focused on credible, robust decision-making tools.
  • Positioning Schwab for Strategic Growth: Our work provided a comprehensive strategy that positioned Schwab as a forward-thinking leader in the crypto space, offering a path to maintain brand trust while exploring new market opportunities.

Looking for more insights? Use this download form to access my collection of detailed case studies, showcasing impactful results across industries. Discover how I’ve tackled complex challenges with innovative solutions.

Case Study: Redesigning Job Budget Allocation for Indeed’s Enterprise Employers

How do you increase user trust in an automated system they don’t understand?

Indeed Enterprise was facing a growing problem. Employer clients were increasingly frustrated with the lack of transparency in a recently implemented automated payment allocation algorithm. The behind-the-scenes process distributed funds to job postings based on market conditions and other metrics. While well-intended to help employers maximize the value of Indeed’s advanced job market advertising model, users didn’t trust the system. Many employers bypassed the algorithm entirely, manually allocating funds per job, which led to inefficiencies, misaligned priorities, and mounting dissatisfaction.

As Senior UX Researcher, I led research for a redesign initiative to address these concerns, focusing on creating a more transparent, user-centered solution. Through in-depth user research and iterative prototyping, we developed a system that empowered employers with greater control over job postings’ budget allocation. The new solution not only increased user engagement and trust in the algorithm, but also shaped the product roadmap for future enterprise clients.

Results Snapshot:

  • Higher User Engagement: Clients who adopted the new workflow reported greater confidence and efficiency in managing job priorities and budget allocation, resulting in better overall platform satisfaction.
  • Increased Trust in Algorithm: The added transparency and control features significantly improved trust in the automated system, leading to higher adoption rates.
  • Strategic Product Influence: The success of this project directly shaped the future direction of Enterprise product strategy, influencing key roadmap decisions.
  • Model for User-Centered Design: The initiative was widely recognized internally as a benchmark for applying user-centered design principles, setting a new standard for the organization.


Case Study: Transforming Merck’s Chronic Illness Research Through Design Thinking

Can design thinking methodologies uncover new opportunities for patient engagement while ensuring innovation readiness for senior teams?

In an effort to engage patients more deeply with the Merck brand and explore new digital opportunities, I led a design thinking initiative focused on chronic illness research. The project aimed to mitigate the risk of Merck’s pharmaceuticals losing exclusivity to generics by identifying digital solutions that would enhance patient engagement. This initiative was also part of a broader digital transformation to build digital acumen across the organization. The challenge was twofold: develop patient-centered digital solutions while guiding senior team members to embrace new methodologies and strategies.

As a design thinking coach, I introduced the team to the Design Thinking model, guiding senior product and marketing leaders through iterative research processes. Together, we uncovered actionable insights into patient behaviors and attitudes, enabling Merck to explore new health-tech solutions that aligned with patient expectations and extended the value of their pharmaceutical offerings.

Results Snapshot:

  • Identified Health-Tech Opportunities: The research uncovered areas where Merck could introduce digital health solutions that aligned with the needs of chronic illness patients and physicians, extending their engagement beyond traditional pharmaceuticals.
  • Leadership Empowerment with New Methodologies: Senior market researchers and product leaders gained hands-on experience with design thinking and new digital research methods, which helped them adopt more innovative approaches across other initiatives.
  • Paved the Way for Future Digital Initiatives: The project laid the groundwork for future patient engagement strategies, positioning Merck to explore further digital transformations in the healthcare space.


Case Study: Enhancing Wawa’s Self-Serve Kiosk for New Dinner Options

How can Wawa introduce new dinner options while ensuring an intuitive, high-quality ordering experience for customers in both established and new markets?

As Wawa expanded into new markets and introduced a dinner menu, the company faced unique challenges: maintaining their reputation for high-quality, made-to-order food, and overcoming the negative connotations associated with traditional gas station fare. Additionally, Wawa needed to enhance the functionality of their self-serve kiosks and mobile app to support these new dinner options, while making the ordering experience smoother and more intuitive for users.

I was engaged as a UX Research Consultant, drawing on my expertise in customer research and menu taxonomy to help Wawa tackle these issues. Our goal was to refine the kiosk and mobile app experience, using qualitative and quantitative methods to gather insights into customer behavior, and ultimately improve both satisfaction and order completion rates.

Results Snapshot

  • Optimized Menu Taxonomy: Streamlined the ordering process by revising the dinner menu structure and improving overall navigation.
  • Improved User Satisfaction: Real user feedback drove enhancements that boosted customer satisfaction with the kiosk and mobile app experience.
  • Actionable Insights: Delivered key findings on user preferences, helping Wawa introduce new offerings while maintaining brand consistency across markets.
