The Third Lens

November 30, 2022, was transformational for everyone.

By December 1, the 5-year AI roadmaps collecting dust in slide decks were suddenly obsolete. Executives who had politely tabled transformation initiatives were now sending 2 a.m. mass emails to their consultants and internal tech teams with messages like:

URGENT: Need to download on ChatGPT. EA CC’d. Cell below.

And it wasn’t just the CEOs. Every VP, Director, and department head was humbled overnight. Individual developers and engineers who had quietly experimented with AI behind the scenes for years were suddenly being pulled into boardrooms they had never been privy to. AI became a litmus test for relevance.

Some were driven by curiosity. Many more by fear. One CEO confided that he sits on a board where peers constantly boast about their AI momentum. Developers began reaching out from personal email accounts, worried their jobs might disappear or be completely redefined.

What followed was predictable: FOMO-fueled spending. Reactive pilots that triggered the inevitable governance hammer. Another reorg. Newly hired ‘shiny object’ leaders brought in to bandaid legacy challenges.

And yet, this wasn’t just a moment. It’s a movement. The ripple effects of that single release are still unfolding today. Because what AI has really exposed isn’t a technology gap... it’s a strategy gap. The lenses through which organizations view uncertainty and ambition are paramount, and it has become abundantly clear that most organizations approach their biggest challenges through two limiting lenses: the surface-level diagnosis and the path of least resistance.

Lens #1: The Surface-Level Diagnosis

Through this lens, the symptoms are often mistaken for the source. The problem is framed in familiar, palatable terms. Bandaiding is a highly sought-after skill in these environments. Think: whack-a-mole. Complexity is downplayed in favor of premature certainty. If that doesn’t sound familiar, perhaps one of these will ring a bell: "We just need a dashboard." "The issue is talent." "Let’s stand up a CoE."

IBM’s Watson for Oncology is one of the best examples of this lens backfiring entirely. In the mid-2010s, IBM heavily marketed Watson for Oncology as a groundbreaking AI system that could assist doctors in diagnosing and recommending cancer treatments. The ambition was bold and oversimplified: use AI to revolutionize healthcare by ingesting troves of data and recommending best-practice treatment options.

The implementation revealed a key failure of the surface-level lens: instead of deeply embedding clinical expertise into the system or addressing the complexities of patient data variability, IBM built and sold a vision based on narrow training data and idealized use cases, largely sourced from a single U.S. hospital: Memorial Sloan Kettering Cancer Center.

You can guess what happened next: Watson for Oncology frequently recommended unsafe or incorrect treatments, struggled to scale, and failed to live up to its marketed capabilities. Physicians were frustrated. The financial losses were monumental. The liability was deemed great enough that IBM discontinued this Watson use case entirely.

Lens #2: The Path of Least Resistance

Here, the goal is momentum, not traction. Solutions are selected based on what’s politically feasible, not what’s transformational. The recommendations stemming from this lens are often conflict-avoidant and informed by layers of bureaucracy. “Let’s copy what our competitor did and tweak it.” “We’ll reorganize first, then tackle the tech.” “Maybe a 6-week POC will show some quick wins.”

In early 2023, Samsung made headlines when several employees from its semiconductor division inadvertently leaked sensitive internal data to ChatGPT. After OpenAI’s tool was made available internally to “boost productivity,” employees reportedly pasted confidential source code and internal meeting notes into the tool, seeking help with debugging or summarization.

As is hopefully common knowledge today, ChatGPT may retain input data for training and context purposes unless specific privacy settings or enterprise agreements are in place.

Samsung’s decision to allow broad internal use of ChatGPT, without guardrails, policies, or an enterprise-grade agreement, was a reactive move. Within weeks, the company banned employee use of generative AI altogether, citing security risks.

It exposed a lack of data governance and employee enablement. It triggered reputational concerns about data protection and IP exposure. It became a cautionary tale in the media for how not to roll out GenAI in the enterprise. This wasn’t a tech failure—it was a leadership oversight, rooted in speed over strategy.
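In hindsight, even a lightweight, imperfect guardrail would have changed this story. As a minimal sketch, assuming Python and a purely hypothetical pre-submission filter (every pattern, name, and example below is illustrative, not a prescription), a first checkpoint might look like this:

```python
import re

# Hypothetical patterns a first-pass filter might flag before a prompt ever
# leaves the corporate network. A real deployment would pair this with DLP
# tooling, policy, training, and an enterprise-grade agreement.
BLOCKED_PATTERNS = [
    (re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"), "private key"),
    (re.compile(r"\b(?:api[_-]?key|secret|token)\s*[:=]\s*\S+", re.IGNORECASE), "credential assignment"),
    (re.compile(r"(?:^|\s)(?:def|class|public|#include)\b", re.MULTILINE), "likely source code"),
    (re.compile(r"\bconfidential\b|\binternal only\b", re.IGNORECASE), "classification marking"),
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): block anything that looks like code,
    credentials, or material marked confidential."""
    reasons = [label for pattern, label in BLOCKED_PATTERNS if pattern.search(prompt)]
    return (not reasons, reasons)

# Example: the kind of paste that triggered the incident.
allowed, reasons = screen_prompt("Can you debug this? def decode_wafer(raw): ...  # INTERNAL ONLY")
print(allowed, reasons)  # False ['likely source code', 'classification marking']
```

A filter this crude would miss plenty, and that is exactly the point: the gap was never technical sophistication. It was the absence of any checkpoint at all.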

Watching one catastrophe after another, one might pause and ask, “How do organizations not recognize their blind spots? Isn’t it obvious?” Perhaps what these incidents call for is empathy. The legacy governance leader who has been in their organization for decades did not give up the rest of their responsibilities for this one; it’s simply one more checkpoint they have to assess for, and a significant one at that. The developer who pasted their code to find an error wasn’t trying to compromise the company’s data. It’s safe to say that most of these failures are the result of good intentions and poor governance.

Lens #3: The Outsider’s Perspective

These two lenses feel safe and familiar, but they rarely lead to enduring impact. We offer a third, and we’d like to illustrate it with one final story:

One Fortune 100 organization that provides a self-service compendium for its internal customers was quick to deploy an internal instance of GPT, complete with (not-so-seamless) BI capabilities and recommendations. Overnight, thousands of employees were given access, accompanied by training programs that saw minimal adoption. The organization has iterated on its instance for nearly two years with one recurring challenge: varying versions of a prompt that call for the same visual often surface contradictory data, insights, and recommendations.

“Hi xx,

Can I get your thoughts here? My team has been working incredibly hard to establish itself as a reliable source for a subset of our executive leadership. Our tools are being questioned on their data integrity because another group has developed a competing tool for these leaders. We’re using sophisticated integrations to ensure that our data sources are accurate. Do you have anyone else going through this?

Thanks -
xx”

While the process of implementing our recommendations for this client was hardly linear, there is a common theme across the organizations that come to us with this scenario: the justified hype around AI has led them to put foundational progress on the back burner, just as that foundation is becoming non-negotiable. A strong foundation in, and plan for, data quality, privacy, governance, and literacy is not optional. And once you’ve reached this scale, deploying thousands (if not tens of thousands) of tools enterprise-wide, it’s no longer as simple as a data dictionary or glossary. The patience and attention this requires rarely exist, and neither does perfect data.
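To make “beyond a glossary” concrete, here is a deliberately simplified sketch in Python, with entirely hypothetical names (REGISTRY, finance.revenue, and so on), of the pattern behind resolving the contradicting-visuals problem above: a governed semantic layer, where every phrasing of a question maps to one canonical metric definition instead of each prompt improvising its own logic.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    """A governed definition: the single place a metric's logic lives."""
    name: str
    sql: str    # the one canonical query, owned by the data team
    owner: str  # an accountable steward, not just a glossary entry

# Hypothetical registry: every phrasing an LLM might extract from a prompt
# resolves to the SAME canonical metric, so two versions of one question
# cannot silently compile into two different calculations.
REGISTRY = {
    "quarterly revenue": Metric(
        name="net_revenue_q",
        sql="SELECT fiscal_quarter, SUM(net_amount) FROM finance.revenue GROUP BY 1",
        owner="finance-data",
    ),
}
ALIASES = {
    "revenue this quarter": "quarterly revenue",
    "net rev by quarter": "quarterly revenue",
    "q revenue": "quarterly revenue",
}

def resolve(phrase: str) -> Metric:
    """Map a free-text phrase to its governed metric definition."""
    key = phrase.lower().strip()
    return REGISTRY[ALIASES.get(key, key)]

# Two differently worded prompts land on one definition, one query, one owner.
assert resolve("Revenue this quarter") is resolve("net rev by quarter")
```

In a real enterprise the registry is a product in its own right, with owners, change control, and literacy programs around it; the sketch only shows why contradictory answers disappear once definitions have a single home.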

Finding the right strategy is only half the battle. Aligning distributed enterprises behind it is a cultural challenge, one shaped by years of working relationships, habits, and invisible power structures that don’t shift overnight. We are not suggesting that a new face is a magic wand that relieves all of the pain felt by most organizations, but that removing tunnel vision can expose a new view of these challenges, inspiring enterprises to pivot and accelerate.

The Meman Method is a disciplined lens grounded in truth, not convenience. We challenge assumptions. We hold up a mirror. We surface hidden risks and name the real problem early, before it names you.

This isn’t a methodology. It’s a mindset and a way of seeing. One that’s outcome-driven, context-aware, and unafraid of complexity. Because when you’re leading through ambiguity, you don’t need more noise - you need clarity. Once you have that, you see your opportunity and get to choose your hard.

We’re not here to help you keep up. We’re here to help you lead.

Mina Meman

Mina is a Kurdish writer based in Portland, OR. Her writing is inspired by her father, Saad, and mother, Kosrat. Mina writes to inspire curiosity and spark motivation in her readers.

https://www.minameman.com