Unlocking AI’s Potential: Overcoming Barriers To Adoption, Part 1 - Data
Unlock AI success by mastering data quality, breaking down silos, and implementing ethical governance for a solid foundation in your transformation journey
A condensed version of this article was originally published on Forbes.
Artificial Intelligence (AI) holds transformative potential for businesses, yet the staggering statistic that over 80% of AI projects fail underscores the challenges organizations face in realizing this promise. AI project failure rates are nearly double those of traditional IT projects, often due to misaligned expectations, poor data quality, and inadequate infrastructure.
As an Innovation Strategist, I've seen firsthand the transformative potential of AI in business. However, I've also witnessed the challenges many organizations face when implementing these solutions. In this first installment of our series, we'll focus on what I consider the foundation of successful AI initiatives: data.
The AI Adoption Landscape
The AI adoption landscape is rapidly evolving. According to McKinsey's 2023 global survey, 55% of companies report AI adoption in at least one function, up from 50% in 2022. This growth is encouraging, but it also means that nearly half of businesses are still on the sidelines.
More recent data from 2025 paints an even more optimistic picture. A survey reveals that 77% of companies are either using or exploring the use of AI in their businesses, and 83% of companies claim that AI is a top priority in their business plans. This significant increase in adoption and prioritization over the past two years demonstrates the growing recognition of AI's importance in the business world.
However, adoption doesn't always translate to success. Only 48% of digital initiatives meet or exceed their business outcome targets, a statistic that highlights the gap between adopting AI and realizing tangible business value, and underscores the need for a more strategic approach to implementation.
Data: The Foundation and the Stumbling Block
Data Quality: The Achilles' Heel of AI
Poor or inconsistent data leads to unreliable AI models. This manifests in various forms, including unstructured or poorly organized data, missing or inconsistent meta-tagging and data definitions, and low accuracy rates. As one CTO I worked with put it bluntly: "We have oceans of data, but it's more like a swamp than a clear lake."
Data Silos: The Political Quagmire
Information trapped in different departments hinders comprehensive AI initiatives. Interestingly, data silos aren't always a technology problem. Often, the challenge is political: business lines view their data as proprietary and refuse to share access. This territorial approach to data ownership creates invisible walls within organizations, preventing the holistic view needed for truly transformative AI applications.
Data Privacy and Ethical Considerations: The Compliance Conundrum
Concerns about protection and compliance slow adoption. As AI becomes increasingly adept at combining various data sources, we're seeing situations where data scrubbed of personally identifiable information (PII) can be combined with other datasets, potentially allowing AI to re-identify the original sources or individuals. This creates a complex balancing act between leveraging data for insights and maintaining privacy and ethical standards.
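To make the re-identification risk concrete, here is a minimal, purely illustrative sketch using synthetic data: a dataset stripped of names can still be linked back to individuals when quasi-identifiers such as ZIP code, birth year, and gender are joined against a public record. Every name, column, and value below is invented for illustration.

```python
import pandas as pd

# "Anonymized" clinical dataset: names removed, but quasi-identifiers remain.
clinical = pd.DataFrame({
    "zip_code":   ["30301", "30301", "60614"],
    "birth_year": [1984, 1991, 1978],
    "gender":     ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# Publicly available records (e.g., a voter roll) with names attached.
public = pd.DataFrame({
    "name":       ["Alice Smith", "Bob Jones", "Carol Lee"],
    "zip_code":   ["30301", "30301", "60614"],
    "birth_year": [1984, 1991, 1978],
    "gender":     ["F", "M", "F"],
})

# Joining on the quasi-identifiers re-attaches names to "anonymized" diagnoses.
reidentified = clinical.merge(public, on=["zip_code", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```

The point is not that joins are dangerous, but that scrubbing direct identifiers alone is rarely enough; governance has to account for what your data can reveal when combined with data you don't control.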
Strategies to Address Data Challenges
To tackle these issues head-on, I recommend implementing robust data governance practices that establish clear ownership, quality standards, and usage policies. This includes investing in data cleaning and integration tools that can transform your data swamp into a clear lake of valuable information. Establishing clear data collection and quality assurance processes ensures that your AI models have reliable inputs from the start.
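One practical starting point is to wire automated quality checks into the pipelines that feed your AI models. The sketch below is a simplified example with hypothetical thresholds and column names, using pandas; it flags missing values, duplicate rows, and out-of-range values before data ever reaches a model.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame, max_null_rate: float = 0.05) -> list[str]:
    """Return a list of data-quality issues found in the dataframe."""
    issues = []

    # Completeness: columns with too many missing values.
    null_rates = df.isna().mean()
    for col, rate in null_rates.items():
        if rate > max_null_rate:
            issues.append(f"{col}: {rate:.0%} missing (threshold {max_null_rate:.0%})")

    # Uniqueness: exact duplicate rows inflate training data and skew models.
    dup_count = df.duplicated().sum()
    if dup_count:
        issues.append(f"{dup_count} duplicate rows")

    # Validity: a simple domain rule, e.g. ages must fall in a plausible range.
    if "age" in df.columns:
        bad_ages = (~df["age"].between(0, 120)).sum()
        if bad_ages:
            issues.append(f"{bad_ages} rows with implausible 'age' values")

    return issues

# Example run on a small synthetic dataset.
sample = pd.DataFrame({"age": [34, 151, 29, 29], "revenue": [10.5, None, 8.2, 8.2]})
for issue in run_quality_checks(sample):
    print("DATA QUALITY:", issue)
```

Dedicated data-quality and observability tools go much further than this, but even lightweight checks like these catch problems long before they show up as unreliable model behavior.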
Remember, high-quality data is a primary source of competitive advantage in the AI world. Organizations that treat their data as a strategic asset gain a significant edge over competitors still struggling with fragmented, low-quality information.
Case Study: Federated Learning in Life Sciences
I've worked extensively with research teams in the life sciences space, specifically around developing digital biomarkers. A recurring challenge was the intentional withholding of specific datasets between research groups, hindering the development of new algorithms and digital biomarkers.
To combat this, we explored Federated Learning. This innovative approach allows organizations to maintain control of and protect their data: models are trained locally against each site's data, and only the resulting model updates, never the raw records, are shared with external collaborators. Federated Learning has shown promising results in healthcare, with studies finding that models trained federatively across 10 institutions can achieve roughly 99% of the quality of models trained on centrally pooled data.
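To illustrate the basic mechanic, here is a toy federated averaging sketch in Python (NumPy only, entirely synthetic data, and a simple linear model rather than anything we actually deployed): each "institution" fits its model locally, and only the model weights are pooled and averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])  # ground-truth weights for the synthetic data

def make_local_dataset(n_samples: int):
    """Synthetic private dataset, standing in for one institution's records."""
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    return X, y

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 20) -> np.ndarray:
    """Train locally by gradient descent; only the updated weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three institutions keep their data private; none of it is ever pooled.
institutions = [make_local_dataset(200) for _ in range(3)]
global_w = np.zeros(3)

# Federated averaging: each round, sites train locally and share only weights.
for round_num in range(5):
    local_weights = [local_update(global_w, X, y) for X, y in institutions]
    global_w = np.mean(local_weights, axis=0)

print("learned weights:", np.round(global_w, 2))  # approaches [2.0, -1.0, 0.5]
```

Production federated learning adds secure aggregation, differential privacy, and far more complex models, but the core idea is exactly this: move the model to the data, not the data to the model.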
Conclusion: Building Your Data Foundation
Data is the foundation upon which successful AI initiatives are built. Without high-quality, accessible, and ethically managed data, even the most sophisticated AI models will falter. As you embark on your AI journey, prioritize addressing these data challenges first.
Remember that successful AI implementation isn't just about the technology; it's about creating an ecosystem where data flows freely, securely, and accurately throughout your organization.
Stay tuned for Part 2 of this series, where we'll explore the human element of AI adoption: how to ensure your team embraces these new technologies, and the change management strategies that help your AI initiatives succeed with your most important asset, your people.