Put Data First: Why AI Success Starts With Data Foundations
Most AI projects fail not because of models, but because of messy, misunderstood data
In boardrooms today, few topics generate as much excitement - and as much anxiety - as artificial intelligence. Executives know that AI holds the promise of reshaping industries, from automating routine tasks to creating entirely new business models. Yet, behind the headlines and lofty projections lies a sobering reality: most AI initiatives are failing to deliver. Studies from MIT, IBM, and others consistently show that between 70% and 95% of AI projects never achieve their intended outcomes.
The root cause isn’t the sophistication of algorithms or the pace of innovation in large language models. It’s something far more fundamental: the quality and governance of the data that fuels these systems. Or as my guest Chris LaCour put it during our conversation:
“To be AI first, you must put data first.”
Chris is the organizer of the upcoming Put Data First conference (more on that below), and he’s spent the past year talking directly with chief data officers, chief AI officers, and digital transformation leaders. The recurring theme he hears is striking in its simplicity: organizations don’t actually know where their data is, what condition it’s in, or how to harness it effectively. In other words, the promise of AI is colliding with the messy reality of enterprise data.
This episode of Facing Disruption gave us the chance to unpack that reality - why so many leaders are struggling, what’s at stake if they don’t get it right, and how a different approach to data could unlock real AI value.
AI in Its Toddler Stage
Artificial intelligence may dominate headlines, but its maturity as a business tool is far less advanced than many assume. Chris LaCour described today’s AI as a toddler - eager, fast-moving, and sometimes prone to falling over.
“We’ve been feeding it everything we could feed it, and now we’re dealing with the repercussions,” he explained.
The image is apt: enterprises have rushed to pour data into models, but like a child experimenting without boundaries, the outputs can be erratic and difficult to predict.
The deeper problem is not AI’s raw potential, but the pace at which organizations have tried to scale it. The market has been conditioned to believe that adoption is an arms race - if you don’t move quickly, you’ll be left behind. Boards and investors are pressing for aggressive timelines, often before internal teams have a clear understanding of their data ecosystem. The result is an uncomfortable pattern: pilots that look promising, but fail to translate into sustainable, enterprise-wide outcomes.
Unpredictability compounds the issue. Large language models, the engines of today's generative AI wave, are probabilistic systems. They don't "know" truth; they predict likely sequences of words based on training data. That means hallucinations, inconsistency, and "catastrophic forgetting" are not bugs to be ironed out, but inherent risks that organizations must plan for. Yet too many companies deploy these systems as if they were deterministic.
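To make "probabilistic" concrete, here is a deliberately toy Python sketch - the vocabulary and probabilities are invented for illustration and come from no real model - of how a language model chooses its next word by sampling from a distribution rather than looking up a fact:

```python
import random

# Toy next-token distribution for the prompt "Our Q3 revenue was..."
# Invented numbers; a real model scores tens of thousands of candidate tokens.
next_token_probs = {
    "up": 0.40,
    "down": 0.25,
    "flat": 0.20,
    "$4.2M": 0.15,  # fluent-sounding, but possibly a fabricated figure
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token in proportion to its probability - a 'token guess'."""
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Run this several times: the continuation varies from run to run because
# the system predicts likely words; it does not retrieve a verified fact.
for _ in range(5):
    print(sample_next_token(next_token_probs))
```

Run the loop twice and you may get different answers. That variability is the same property that surfaces in production as hallucination, which is why it must be managed rather than patched out.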
For executives, this toddler stage raises a leadership challenge. Do you push ahead, hoping to capture early-mover advantage, or do you slow down and strengthen your footing? The answer lies in balance. Ignoring AI entirely is not an option - competitors will find efficiencies and innovations that could leave laggards behind. But treating AI as a mature, plug-and-play solution is equally risky. The most resilient organizations are those willing to embrace AI’s potential while still putting guardrails in place, starting with a brutally honest assessment of their data readiness.
The Core Problem: Data Chaos and “Data Sewage”
Every executive understands the value of data in theory. In practice, most organizations are overwhelmed by it. Leaders tell Chris LaCour the same thing again and again: they don't actually know where their data resides, who controls it, or whether it can be trusted. The situation isn't new - companies have been grappling with information sprawl since the dawn of email - but the stakes are higher now that AI depends on data as its raw material.
Structured data - the kind neatly organized in databases - is challenging enough to manage. But it represents only a fraction of what enterprises actually generate. Up to 80% of enterprise data is unstructured, buried in PDFs, videos, chat logs, and other formats that don’t fit easily into conventional systems. This is where the term “data sewage” has begun to surface: information that exists in volume, but is so disorganized, duplicated, or misclassified that it actively hinders rather than helps.
Consider what this looks like inside a large enterprise:
Duplicate records that cause conflicting outputs when AI models attempt to generate customer insights.
Poor classification, where sensitive data is mislabeled or overlooked entirely, creating compliance risks.
Fragmented ownership, with marketing, finance, and operations each using their own tools and standards.
“A lot of companies are just wrapping their head around structured data. Unstructured, which makes up 80%, isn’t even being touched.”
This data chaos is why so many AI efforts fail to progress beyond proof-of-concept. Models are only as good as the data they ingest. When that data is messy, incomplete, or contradictory, the results are inevitably unreliable. Worse, companies often don’t discover these flaws until after they’ve invested heavily in pilots or vendor contracts.
For leaders, the message is clear: before AI can become a source of competitive advantage, data must be treated as a strategic asset. That means not only cleaning and consolidating what exists, but also creating governance processes that ensure new data doesn’t simply add to the sewage problem.
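To make that concrete, here is a minimal sketch - pandas assumed, with hypothetical column names and records invented for the example - of the kind of audit that surfaces the duplicates and classification gaps described above:

```python
import pandas as pd

# Hypothetical customer records pulled from two departmental systems.
records = pd.DataFrame({
    "email": ["a.smith@acme.com", "A.Smith@Acme.com ", "b.jones@acme.com"],
    "name": ["Alice Smith", "Alice Smith", "Bob Jones"],
    "sensitivity": ["PII", None, "public"],  # a missing label is a compliance gap
})

# Normalize before matching: many "duplicates" differ only in casing or whitespace.
records["email_norm"] = records["email"].str.lower().str.strip()

duplicates = records[records.duplicated("email_norm", keep=False)]
unlabeled = records[records["sensitivity"].isna()]

print(f"{len(duplicates)} records share an identity with another record")
print(f"{len(unlabeled)} records carry no sensitivity classification")
```

Even this toy example reproduces the pattern above: two systems disagree about whether Alice Smith is one customer or two, and one copy of her record has no classification at all.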
Short Tenures, Big Expectations for CDOs
If the enterprise data landscape is chaotic, the Chief Data Officer (CDO) holds one of the most thankless jobs in business. On paper, CDOs are tasked with turning sprawling information ecosystems into coherent, trustworthy assets that power AI and analytics. In reality, many enter their roles only to find the situation far worse than advertised.
Chris LaCour shared what he’s repeatedly heard from data leaders: new CDOs are often given a two-year window to “get the company’s data together.” If they succeed, they are hailed as transformation leaders. If they don’t, they are quickly replaced.
“They have a couple years to get the company’s data together. If they don’t, they’re gone.”
The problem is that the expectations rarely match the reality on the ground. Executives may tell themselves their organization's data quality is strong, but once a CDO begins peeling back the layers, the truth emerges: duplicates, inconsistent metadata, and incomplete governance frameworks are everywhere. What looked like a sprint to AI readiness turns out to be a marathon.
This revolving-door dynamic has consequences. Short CDO tenures mean institutional memory is lost just as progress begins to take shape. Teams grow cynical after multiple “data transformations” that never seem to stick. Meanwhile, the pressure to show quick wins drives some leaders toward cosmetic fixes - deploying flashy tools without solving underlying classification or governance problems.
For boards and CEOs, this should be a wake-up call. Treating the CDO role as a short-term experiment undermines the very foundations AI initiatives depend on. Building reliable, scalable data practices is not a 24-month project; it’s an organizational capability that requires sustained investment, patience, and cross-functional commitment. The companies that get this right won’t be the ones that churn through CDOs - they’ll be the ones that empower them with time, authority, and resources to address systemic issues.
Stakeholders and the Knowledge Gap
Artificial intelligence doesn’t sit neatly in one department. Its impact cuts across every function, which means no single executive can own the strategy outright. This makes AI a uniquely complex leadership challenge: the CFO cares about cost and ROI, the CISO worries about security vulnerabilities, the Chief Risk Officer frames it as a compliance problem, and the General Counsel sees legal exposure. Meanwhile, business line leaders want speed, efficiency, and new customer value.
“AI means different things to different people in different roles.”
This fragmentation is both natural and dangerous. Without deliberate coordination, organizations risk talking past one another - chasing vendor promises in one area while underestimating risks in another.
The knowledge gap widens the problem. Few leaders outside of data or technology roles fully understand how AI systems work, or what their limitations are. For example, large language models are “token guessers,” as Chris described them. They don’t represent truth; they generate predictions based on probabilities. To a compliance officer or lawyer, that distinction is critical - yet many companies deploy these systems without fully briefing stakeholders on how outputs should be interpreted.
The result is often a mismatch of expectations: business leaders assume AI is a reliable decision-making engine, while technical teams know it is a probabilistic system prone to hallucinations. Bridging this gap requires more than technical training; it requires creating shared literacy at the leadership level.
Forward-looking organizations are addressing this by forming cross-functional AI committees where CISOs, CFOs, legal officers, and data leaders sit together to align on priorities. These groups not only surface blind spots early, but also accelerate adoption by ensuring that AI is framed as a business transformation, not just a technology project.
For executives, the key is recognizing that AI is not a siloed initiative - it is a collective endeavor. Closing the knowledge gap is just as important as closing the data gap.
The AI Bubble and Sustainability Concerns
The AI boom has triggered a flood of investment. Trillions of dollars are projected for new data centers, high-performance chips, and cloud infrastructure. Every major software vendor now advertises some “AI-powered” capability. On the surface, it looks like unstoppable momentum. But underneath, questions about sustainability are growing louder.
“You cannot continue to spend more money than you’re bringing in. It’s not sustainable.”
Despite the hype, leading AI companies like OpenAI and Anthropic are not yet profitable. Their revenue models remain unproven, even as their infrastructure costs soar. Scaling large language models requires enormous compute power, and projections suggest that achieving artificial general intelligence (AGI) could require trillions in additional investment.
The environmental and energy implications add another layer of concern. Data centers already consume around 2% of global electricity, and AI workloads are set to accelerate that demand dramatically. Some in the industry are openly discussing nuclear reactors as a future enabler of AI infrastructure - a signal of just how energy-intensive the current approach is. For executives, that means AI adoption is not only a financial risk but also a reputational one, as sustainability metrics become core to investor and customer expectations.
There is also the market psychology to consider. Just as the dot-com bubble was defined by overinvestment in unproven business models, today’s AI surge is marked by inflated expectations. Every new funding round or press release seems to imply that AI will solve every business problem. When results inevitably fall short, disillusionment follows. Gartner’s hype cycle describes this pattern as the “trough of disillusionment” - and AI is heading there fast.
This doesn’t mean AI is a fad. As I mentioned during the conversation, “AI is an innovation engine, but it’s not invention.” It is extraordinarily powerful at recombining information, but it cannot create fundamentally new knowledge. For leaders, this means adjusting expectations: AI can drive efficiency and enable faster iteration, but it is not a silver bullet.
The bubble question isn’t whether AI has long-term value - it does. The real question is which companies will emerge from this phase with sustainable models, and which will collapse under the weight of overinvestment and unmet promises.
Practical Steps for Organizations
For all the uncertainty around AI, one truth is clear: success will not come from chasing the latest tool, but from building a resilient foundation. Chris LaCour emphasized this repeatedly, reminding us that “AI governance is becoming a top priority.” Organizations that rush to deploy without addressing fundamentals will find themselves stuck in endless pilots, or worse - facing regulatory and reputational fallout.
So what does a practical path forward look like?
1. Form cross-functional AI committees
AI is not an IT project; it is an enterprise transformation. Companies seeing traction are bringing together data, legal, risk, compliance, security, and business leaders in structured forums. These committees ensure AI is evaluated not only for technical feasibility but also for financial impact, ethical use, and regulatory exposure.
2. Start small with high-ROI use cases
Instead of aiming for sweeping transformation, focus on narrow projects where data quality is manageable and ROI is measurable. Customer support automation, invoice processing, and knowledge search are common entry points.
3. Tie every AI investment to clear ROI metrics
Boards and CFOs want to know: what will this save, replace, or enable? Map investments directly to CapEx and OpEx implications. This disciplines teams to pursue impact-driven initiatives rather than technology experiments.
4. Invest in governance and risk management
AI introduces new risk profiles, from hallucinated outputs to privacy breaches. Treat AI governance like cybersecurity: a continuous program of monitoring, controls, and education. Include policies for data classification, model usage, and accountability; a minimal sketch of one such control follows these steps.
5. Build organizational literacy
Executives don’t need to become data scientists, but they do need to understand what AI is - and isn’t. Demystifying probabilistic models, explaining limitations like hallucinations, and clarifying where human oversight is required helps align expectations.
Taken together, these steps create an environment where AI can deliver real, compounding value instead of hype-driven frustration. They also prepare organizations for the next wave of regulatory scrutiny, which is certain to increase as governments grapple with the implications of AI in critical industries.
Building Community Around Data and AI
Note: While we aren’t sponsored by Put Data First, AJ Bubb will be facilitating roundtable discussions during the conference.
If data quality and governance are the bottlenecks, and if success requires cross-functional alignment, then one truth becomes clear: leaders cannot solve these challenges in isolation. The problems are too broad, too complex, and too deeply interconnected across technology, legal, risk, and finance. What’s needed is not another vendor pitch or glossy panel - it’s structured dialogue among practitioners facing the same realities.
That was the motivation behind Chris LaCour’s Put Data First conference. Instead of centering on sponsor-driven keynotes, the event is designed around practitioner-led roundtables. Each participant has the chance to join multiple sessions, not as a passive listener, but as a contributor. The goal is to create a forum where executives can compare notes on the messy realities of AI - what’s working, what isn’t, and what lessons can be carried back to their organizations.
“It’s about deeper conversations, not vendor agendas.”
The October 27–29, 2025 gathering in Las Vegas reflects this ethos. Leaders from across industries - finance, healthcare, defense, energy - will step away from day-to-day pressures to focus on issues that can’t be solved in silos:
How to build durable AI governance structures
How legal, compliance, and risk functions intersect with innovation
How to translate data strategy into tangible ROI
How to prepare boards and investors for realistic outcomes
This intentional format addresses a gap in the current ecosystem: “There’s so much content out there already. What people want is intentional human connection and the chance to dive deeper into problems with peers.”
The lesson here isn’t only about one event. It’s about recognizing that AI maturity will come from communities of practice - not from isolated pilot projects or vendor promises. Executives who invest in building those connections will be better positioned to lead their organizations through the hype cycle and toward sustainable impact.
The Path Forward
The story of AI so far has been defined by bold ambition colliding with messy reality. Models have advanced rapidly, but most organizations remain bogged down by data chaos, governance gaps, and misaligned expectations. It’s little wonder that up to 95% of AI initiatives fail to meet their goals.
Yet the lesson isn’t to slow down or retreat. It’s to redirect focus. As Chris LaCour reminded us, “To be AI first, you must put data first.” That means confronting uncomfortable truths about the state of your data, investing in classification and governance, and giving your Chief Data Officer the mandate and support to succeed. It means equipping executives across the C-suite with enough shared literacy to close the knowledge gap. And it means resisting the temptation to treat AI as magic when it is, in fact, a tool - powerful, but only as reliable as the foundations it rests on.
Leaders who embrace this mindset will be better prepared for what comes next. They will avoid the trap of overinvestment in unsustainable initiatives, and instead build a portfolio of use cases that compound value over time. They will cultivate resilience by grounding AI in practical ROI, not hype. And they will strengthen their organizations by engaging in the kind of intentional, practitioner-driven dialogue that events like Put Data First are pioneering.
The path forward is not about chasing every new algorithm or racing to the biggest compute cluster. It’s about doing the unglamorous work of putting your data house in order, aligning your leadership team, and building communities of practice that can sustain progress. Those who take that path won’t just survive the trough of disillusionment - they’ll emerge from it stronger, with AI that truly serves their business and their customers.
I’d like to thank Chris LaCour for joining me on Facing Disruption and sharing his perspective on why putting data first is the foundation for meaningful AI adoption.
If you’re interested in continuing this conversation, the Put Data First Conference takes place October 27–29, 2025 at Planet Hollywood in Las Vegas. The event is built around practitioner-led roundtables that bring executives together to tackle the toughest questions around AI, data governance, and organizational readiness.
You can learn more and explore the agenda at www.putdatafirst.com.