The Real ROI of AI: Why Boring Automation Beats Disruptive Dreams
Innovation's dirty secret: the highest returns come from automating tedious work, not chasing moonshots.
The innovation industry has conditioned executives to dream big. Launch revolutionary products. Disrupt entire markets. Create the next unicorn. Yet while organizations chase these transformative visions, they overlook a more immediate opportunity hiding in plain sight. The most valuable application of artificial intelligence isn’t building the next breakthrough product; it’s eliminating the thousands of small, tedious tasks that drain productivity and morale across every department. These “boring” automation projects deliver measurable returns within months, not years, and they don’t require massive transformation initiatives or specialized AI expertise. The disconnect between innovation theater and operational reality has never been wider, and it’s costing companies millions in unrealized efficiency gains.
This tension between aspirational and practical innovation came into sharp focus during my recent webcast conversation with Ahmet Acar, my fellow former AWS Innovation Advisor. Ahmet spent two decades building products at McKinsey, Google, Meta, General Electric, and AWS before relocating to Nairobi, and his perspective working across tech giants, traditional enterprises, and emerging markets cuts through the AI hype cycle to examine where organizations are actually capturing value. What emerged from our discussion is a framework for thinking about AI-driven innovation that prioritizes immediate operational improvements over speculative disruption.
Key Takeaways
Before diving into our full conversation, here are the most important insights Ahmet and I uncovered:
Stop chasing moonshots, start automating friction. The highest-ROI AI applications aren’t revolutionary products; they eliminate tedious tasks like proposal creation, meeting notes, and data aggregation that consume hours weekly across your organization.
Data infrastructure determines everything. No AI tool delivers value if it can’t access relevant information in usable formats. Fix your data foundation before pursuing advanced applications.
Small improvements compound dramatically. Amazon and Google built dominance through thousands of incremental optimizations, not single breakthroughs. When 50 knowledge workers each save 5 hours weekly on routine tasks, that’s 250 hours of reclaimed capacity, equivalent to more than six additional full-time employees.
Technical barriers are falling fast. An eight-person agency can now compete against fifty-person firms by spending $1,000-1,500 monthly on AI tools that automate roles previously requiring specialized expertise.
Not all AI applications deserve deployment. Mental health support, therapeutic interventions, and high-stakes decisions involving human psychology or rights require clear boundaries where AI assists rather than replaces human judgment.
What You Can Do Right Now
Based on our conversation, here’s where to start regardless of your organization’s size or AI maturity:
Map your information workflows. Identify where employees spend time gathering data from multiple systems, reformatting documents, or updating templates. These activities are prime automation candidates that deliver measurable ROI within months.
Calculate the true cost of “information work.” Track how many hours your team spends weekly on status reports, meeting notes, and proposal updates. The direct costs in salaries plus opportunity costs of work not being done often reveal surprising value in automation.
Start with tools, not custom development. String together specialized tools through no-code integration platforms like Zapier rather than building custom solutions. This lets you experiment rapidly and benefit from continuous vendor improvements.
Set clear AI boundaries now. Define categories of decisions where AI should assist rather than replace human judgment. Make these explicit in policies before deployment pressure forces reactionary decisions.
Now, let me share what Ahmet and I discovered about where AI is actually creating value today, and where it’s falling short.
The Innovation Vocabulary Problem
Most organizations struggle with innovation, not because they lack ideas, but because they misunderstand what innovation actually means. Traditional companies typically frame innovation as something dramatically new within their industry, a product launch worthy of press releases and executive presentations. This understanding creates an immediate barrier: innovation becomes synonymous with high-stakes projects that require significant resources and carry substantial risk.
Tech companies like Google and Amazon approach innovation differently. Rather than focusing on revolutionary breakthroughs, they prioritize continuous incremental improvements. These companies constantly tweak existing systems, transplant successful approaches from one domain to another, and optimize customer experiences through hundreds of small modifications. The cumulative effect of these improvements often exceeds the impact of any single “disruptive” project.
Recent Federal Reserve research indicates that 28% of workers now use generative AI at work to some degree, suggesting that practical adoption is outpacing the rhetoric around transformation. Yet this adoption remains concentrated in specific, narrow use cases rather than the expansive applications promised by AI vendors. Meanwhile, 83% of companies report that using AI in their business strategies is a top priority, and AI is projected to improve employee productivity by 40%, a figure that masks significant variation in actual implementation success.
The distinction between incremental and disruptive innovation matters because it determines resource allocation and risk tolerance. Disruptive innovation describes a process in which new entrants challenge incumbent firms, often despite inferior resources, while radical innovations stem from the creation of new knowledge and the commercialization of completely novel ideas or products. Most organizations need both types, but the balance has shifted too far toward pursuing transformation while neglecting optimization.
Entrepreneurs operating in competitive markets don’t have the luxury of debating innovation frameworks. They focus instead on gaining a competitive advantage, improving operations, optimizing sales funnels, or creating better content. Whether their approach qualifies as “innovative” by academic standards matters far less than whether it helps them win customers and generate revenue. This pragmatic mindset explains why smaller, nimble organizations often extract more value from AI tools than their larger, better-resourced competitors.
Where AI Actually Creates Value Today
The current wave of AI applications centers on hyper-personalization of customer experiences. Companies are moving beyond basic demographic targeting to create individualized experiences based on behavioral patterns, preferences, and contextual signals. This isn’t entirely new; platforms like Facebook have enabled microtargeting for years, but AI tools now make sophisticated personalization accessible to companies without massive data science teams.
Stitch Fix provides an instructive case study. Founded in 2011, the fashion recommendation service spent years refining its approach to curating individualized style selections. The company experienced significant growth during 2021-2022, not coincidentally timed with the soft launch of large language models that enabled new capabilities in understanding customer preferences and generating recommendations. What began as a manual process supported by basic algorithms evolved into an AI-powered system that could analyze style preferences, body measurements, lifestyle needs, and trend data to create personalized selections at scale.
This pattern will likely extend across retail and beyond. Amazon’s recommendation engine, already sophisticated, will become significantly more contextual and conversational. Google search results will adapt not just to query terms but to user context, search history, and inferred intent, assuming Google can address the well-publicized problems with AI-generated misinformation, which have included suggesting users add glue to pizza cheese or eat rocks.
Yet personalization represents only one application area. AI is compressing entire phases of the traditional innovation lifecycle. The distinction between prototyping and building working code is collapsing. When someone can frame a concept and generate functional software within hours without coding expertise, the rationale for creating wireframes and detailed specifications diminishes. Similarly, designing A/B tests increasingly involves articulating assumptions to test rather than manually configuring experimental parameters. Tools like Figma, Webflow, and Canva have begun incorporating these capabilities, allowing users to define desired outcomes and let AI handle implementation details.
This compression doesn’t eliminate the need for human judgment; it shifts where that judgment gets applied. Rather than spending time on mechanical tasks like writing boilerplate code or creating test variants, teams can focus on strategic questions: What should we build? What assumptions need validation? How do we interpret results? The tools handle execution; humans provide direction and critical thinking.
Rethinking Process and Facilitation
During my conversation with Ahmet, we explored how innovation methodologies themselves are becoming candidates for AI augmentation. The working backwards sessions and design thinking workshops that we facilitated as innovation advisors follow predictable patterns. A facilitator guides participants through structured questions, captures insights, helps synthesize themes, and documents decisions. Much of this process involves applying a methodology consistently rather than drawing on deep expertise accumulated over decades.
This observation doesn’t diminish the value of experienced facilitators like Ahmet and myself; it suggests our expertise becomes more valuable when freed from mechanical process execution. An AI system could guide teams through structured innovation exercises, ask probing questions, capture responses, and identify patterns. The facilitator’s role would shift toward higher-order concerns: reading team dynamics, navigating political tensions, knowing when to deviate from the script, and providing strategic context that transcends any particular methodology.
Gaming companies are pioneering this approach. Wizards of the Coast is developing virtual game masters for its D&D Beyond platform. These AI systems prompt players through gameplay, respond to actions, and maintain narrative coherence. While not replacing human game masters for complex campaigns, they make the game accessible to groups who lack an experienced facilitator. Similar applications are emerging in mental health support and coaching, though these domains raise significant concerns about the limitations of current technology, a topic I’ll examine more closely.
The concept extends beyond formal facilitation. Many knowledge work processes involve gathering information, asking clarifying questions, summarizing key points, and identifying action items. These activities consume enormous time yet require relatively little specialized expertise. Meeting notes, for instance, typically involve one participant splitting attention between discussion and documentation, resulting in incomplete records and reduced participation. AI note-taking tools like Otter, Fathom, and similar platforms now handle this automatically, generating transcripts, summaries, action items, and even drafting follow-up emails.
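To make the pattern concrete, here’s a minimal sketch of what an AI note-taking pipeline does under the hood, written against the OpenAI Python SDK. The model name, prompt, and file path are illustrative assumptions; commercial tools like Otter and Fathom run their own proprietary pipelines.

```python
# Minimal sketch of the note-taking pattern: raw transcript in, structured notes out.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var;
# the model name and prompt are illustrative, not what commercial tools actually use.
from openai import OpenAI

client = OpenAI()

def summarize_meeting(transcript: str) -> str:
    """Turn a raw meeting transcript into a summary, action items, and a draft follow-up."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": (
                "You are a meeting assistant. Produce: (1) a five-bullet summary, "
                "(2) action items with owners, (3) a short follow-up email draft."
            )},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Usage (hypothetical file):
# print(summarize_meeting(open("meeting_transcript.txt").read()))
```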
This shift from “artificial intelligence” to “augmented intelligence” better captures the current state of capability. These tools excel at structured, repetitive tasks but struggle with nuanced judgment calls. They can identify patterns in data but not determine which patterns matter. They can generate options but not evaluate trade-offs that require deep domain expertise or consideration of political realities. Positioning them as augmentation rather than replacement sets appropriate expectations and focuses deployment on high-value use cases.
The Democratization of Technical Capabilities
One of the most significant impacts of AI tools involves making sophisticated technical work accessible to non-specialists. Ahmet shared a compelling example from a Singapore-based agency founder he recently spoke with. The eight-person team this founder leads competes against much larger professional services firms by leveraging AI tools throughout their operations.
The meeting itself illustrated the approach. Rather than having an account manager attend to take notes, the founder used Otter to record the conversation automatically. The tool generated a transcript, summarized key points, identified agreed actions, and drafted follow-up emails, all without human intervention. This automation eliminates one full-time role from typical agency structures.
Their operations backend revealed even more extensive automation. The team connected Otter to Pipedrive (their CRM) through Zapier, creating a workflow where meeting summaries automatically populate deal records. Additional integrations link various AI platforms they use for different client work, automating routine transitions between tools and eliminating manual data entry. These automations don’t involve complex programming; they’re configured through no-code integration platforms by people who understand business processes but lack technical backgrounds.
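For the curious, here’s roughly what that no-code workflow amounts to if you wrote it yourself. The CRM endpoint and field names are hypothetical placeholders modeled on a generic REST API, not Pipedrive’s documented interface; the point of a platform like Zapier is that nobody on the team has to write or maintain this.

```python
# Sketch of what a Zapier-style automation does behind the scenes: take a finished
# meeting summary and attach it to the matching CRM deal. Requires the requests
# package (pip install requests). The base URL and field names are assumptions.
import os
import requests

CRM_BASE_URL = "https://api.example-crm.com/v1"  # hypothetical endpoint
CRM_API_KEY = os.environ["CRM_API_KEY"]

def attach_summary_to_deal(deal_id: int, summary: str) -> None:
    """Post a meeting summary as a note on a CRM deal record."""
    response = requests.post(
        f"{CRM_BASE_URL}/notes",
        params={"api_token": CRM_API_KEY},
        json={"deal_id": deal_id, "content": summary},
        timeout=10,
    )
    response.raise_for_status()

# A webhook from the note-taking tool would typically trigger this, passing the
# summary plus whatever identifier lets you look up the right deal.
```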
The cost structure proves particularly interesting. The agency spends approximately $1,000-1,500 monthly on its AI tool stack, significantly less than one employee’s salary. This investment delivers capabilities that would otherwise require multiple specialized roles: project management, client communications, content creation, and operational coordination. The result: an eight-person team competing effectively against firms with fifty or more employees.
This pattern extends beyond agencies. Manufacturing operations that once required specialized CAD expertise can now use generative design tools. Users specify requirements, constraints, and performance goals; the system generates optimized designs automatically. Similarly, 3D printing workflows that previously demanded technical knowledge of modeling software, print settings, and material properties can be automated through AI tools that handle conversion and optimization.
The implications reach beyond cost savings. When technical barriers fall, organizations can experiment more rapidly. Teams can test ideas that would previously require specialized resources, shortening the path from concept to validation. This expanded capability doesn’t eliminate the need for experts; it allows experts to focus on complex problems rather than routine technical execution.
The Data Foundation Problem
A persistent gap exists between AI vendors’ promises and enterprises’ reality. Professional services firms that sell AI transformation to clients often struggle to implement similar capabilities internally. This isn’t hypocrisy; it reflects the genuine difficulty of making these tools work within complex organizational contexts.
The fundamental challenge involves data. AI systems require structured, accessible information to generate useful outputs. Most enterprises have accumulated decades of unstructured data scattered across incompatible systems. A professional services firm might have created thousands of proposals for the same industry, but if those proposals exist as Word documents stored in individual SharePoint sites, personal drives, and email archives, that knowledge remains effectively inaccessible.
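Making that knowledge accessible starts with unglamorous extraction work. Here’s a minimal sketch of the first step, assuming the python-docx package and an illustrative folder path; a production system would layer embeddings or a proper search index on top of the naive keyword search shown here.

```python
# Minimal sketch of the groundwork: pull text out of a folder of Word proposals
# so anything downstream (search, retrieval, an LLM) can actually use it.
# Assumes the python-docx package (pip install python-docx); paths are illustrative.
from pathlib import Path
from docx import Document

def extract_text(path: Path) -> str:
    """Concatenate all paragraph text from a .docx file."""
    return "\n".join(p.text for p in Document(str(path)).paragraphs)

def build_corpus(root: str) -> dict[str, str]:
    """Walk a directory tree and map each proposal's path to its plain text."""
    return {str(p): extract_text(p) for p in Path(root).rglob("*.docx")}

def search(corpus: dict[str, str], term: str) -> list[str]:
    """Naive keyword search; real systems would use embeddings or a search index."""
    return [path for path, text in corpus.items() if term.lower() in text.lower()]

# Usage (hypothetical path):
# corpus = build_corpus("/shared/proposals")
# print(search(corpus, "manufacturing"))
```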
Data silos aren’t just a technology problem; they’re reinforced by organizational dynamics. At some consultancies, having the right slide deck or proposal template becomes social currency. People who curate valuable materials gain influence by selectively sharing them. Creating centralized, searchable knowledge repositories eliminates this dynamic, threatening informal power structures. Employees lack the incentive to contribute their best work to shared systems when doing so diminishes their individual value.
These challenges explain why even sophisticated technology companies struggle with basic knowledge management. Ahmet shared an example of one manufacturing client that needed assembly instructions digitized to support an augmented reality work instruction system. The “official” source turned out to be a binder maintained by a long-time employee who had photographed equipment and handwritten notes over fifteen years. No digital systems, no structured formats, no version control. Before implementing any AI-powered solution, the organization needed to complete a fundamental digitization project costing hundreds of thousands of dollars.
This pattern repeats across industries. Healthcare organizations want to deploy AI diagnostic tools but lack standardized electronic health records. Financial institutions seek to automate compliance but struggle with unstructured documents and inconsistent data formats. Manufacturers aim to optimize operations but can’t aggregate sensor data from legacy equipment. In each case, the sexy AI application requires unsexy data infrastructure work first.
The disconnect between aspiration and reality creates a market opening for smaller, nimbler organizations. A lean team that commits to maintaining clean, structured data from the start can deploy AI tools that larger competitors with legacy systems cannot. This advantage compounds over time as the data foundation enables additional capabilities while incumbents remain mired in transformation programs.
Compounding Small Improvements
Large technology companies built their dominance through relentless incremental improvement rather than revolutionary breakthroughs. Amazon didn’t succeed because of any single innovation; it succeeded by optimizing thousands of small details in logistics, inventory management, recommendation algorithms, and customer service. Google’s search quality advantage came from continuous refinement of ranking algorithms, not one-time breakthrough discoveries.
This approach conflicts with how traditional enterprises think about innovation budgets. Most organizations fund discrete “innovation projects” with defined scopes, timelines, and success metrics. This structure works for building new products but fails for the accumulation of small improvements that drive operational excellence. No executive wants to champion a project titled “Reduce Proposal Creation Time by 15%,” even though dozens of such improvements would transform productivity.
The most valuable AI applications often involve these boring, unglamorous tasks. Consider professional services proposal development. Someone opens a previous proposal document, copies relevant sections, manually updates client information, searches for new case studies, reformats everything, and routes drafts via email to multiple reviewers. This process might occupy skilled professionals for days on each proposal, yet it consists largely of template manipulation and information retrieval, precisely what AI tools handle well.
Similarly, many enterprise workflows involve gathering information from multiple systems, summarizing it for decision-makers, and distributing updates to stakeholders. An account manager might spend hours each week pulling customer data from Salesforce, usage metrics from analytics platforms, support tickets from Zendesk, and financial data from ERP systems to prepare quarterly business reviews. AI tools can aggregate this information automatically, generate summary reports, identify notable trends, and draft executive updates, taking over tasks that consume time but create little value through human execution.
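A rough sketch of that aggregation looks something like the following. Every URL and field name is a hypothetical placeholder; real integrations would use each vendor’s SDK and authentication.

```python
# Sketch of automating the quarterly-business-review grind: pull data from each
# system's API and assemble one summary. Requires requests (pip install requests).
# Every URL below is a hypothetical placeholder, not a real vendor endpoint.
import requests

SOURCES = {
    "crm": "https://api.example-crm.com/v1/accounts/{id}",        # hypothetical
    "analytics": "https://api.example-analytics.com/usage/{id}",  # hypothetical
    "support": "https://api.example-support.com/tickets/{id}",    # hypothetical
}

def gather_account_snapshot(account_id: str) -> dict:
    """Fetch the raw inputs an account manager would otherwise copy by hand."""
    snapshot = {}
    for name, url in SOURCES.items():
        response = requests.get(url.format(id=account_id), timeout=10)
        response.raise_for_status()
        snapshot[name] = response.json()
    return snapshot

def draft_review(snapshot: dict) -> str:
    """Assemble a plain-text QBR skeleton; an LLM pass could polish the prose."""
    return "\n".join(f"## {name.upper()}\n{data}" for name, data in snapshot.items())
```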
The cumulative impact of automating these activities significantly exceeds headline-grabbing AI projects. When fifty knowledge workers each save five hours weekly on routine tasks, that’s 250 hours of reclaimed capacity per week, equivalent to more than six additional full-time employees. With AI expected to improve employee productivity by 40%, organizations capturing even a fraction of that potential through operational automation will create substantial competitive advantage.
Yet most organizations ignore these opportunities while pursuing transformative AI initiatives. They’ll fund a three-year program to develop AI-powered products while leaving employees to manually update spreadsheets and chase information across systems. The transformation program might eventually deliver breakthrough capabilities, but the organization hemorrhages productivity daily on automatable work.
The Bias Challenge
AI systems inherit biases from their training data, creating new risks even as they solve existing problems. Early demonstrations revealed ChatGPT’s training origins when users crafted prompts that exposed underlying source materials. Google’s AI search feature notoriously suggested adding glue to pizza cheese, a “solution” drawn from Reddit joke posts that made it into the training data. More seriously, systems trained on internet forums amplify whatever biases, misconceptions, and extreme views appear in those sources.
The implications extend beyond occasional absurd suggestions. When AI tools inform hiring decisions, performance evaluations, customer service interactions, or product recommendations, biased outputs can systematically disadvantage certain groups. Unlike human bias, which varies by individual and can be directly addressed, algorithmic bias operates at scale and often remains invisible until someone specifically tests for it.
Traditional innovation processes include built-in mechanisms for addressing bias, though imperfect ones. Design thinking workshops deliberately include diverse perspectives. The “devil’s advocate” role formally challenges assumptions. Cross-functional teams bring different viewpoints and incentives. These dynamics create friction, but productive friction that reveals blind spots and challenges groupthink.
AI tools risk eliminating this friction. When individuals work alone with AI assistants rather than in diverse teams, they lose access to perspectives that differ from their own. The AI provides intelligent-sounding responses that might reinforce existing biases rather than challenging them. This dynamic becomes particularly dangerous because AI outputs carry an artificial veneer of objectivity. People assume that machine-generated analysis is neutral even when it reflects deeply biased training data.
The most productive innovation sessions I’ve facilitated combine strong expertise with genuine intellectual humility. Participants hold well-developed viewpoints but remain open to revising them when presented with evidence or alternative perspectives. They engage in constructive conflict that tests ideas without devolving into interpersonal conflict. These qualities emerge from emotional intelligence and social skills that AI systems don’t possess.
Current AI tools might help surface information or generate options, but they can’t facilitate the human dynamics that produce breakthrough insights. They can’t read when someone is disengaging from discussion, when power dynamics are preventing honest input, or when a team needs to take a step back and reframe the problem. These capabilities require understanding context, reading subtle social cues, and exercising judgment about when to push and when to redirect, precisely the areas where AI remains weakest.
Some organizations are experimenting with tools that monitor meeting dynamics: tone of voice analysis, participation tracking, and facial expression interpretation. While potentially valuable for helping people understand their communication patterns, these capabilities aren’t yet reliable enough for high-stakes applications. Computer vision experts caution that emotional recognition systems can’t accurately detect sentiment across different contexts and cultures, a fundamental limitation that makes current implementations premature at best and potentially harmful at worst.
Responsible Deployment Boundaries
Not all AI applications deserve deployment, regardless of technical feasibility. Mental health support, psychological coaching, and therapeutic interventions represent particularly dangerous domains for AI systems that lack genuine understanding of human psychology and cannot fully grasp context, nuance, or crisis situations.
The challenges go beyond technical limitations. These systems operate in domains where science itself remains incomplete. Neuroscience still doesn’t fully understand how brains produce consciousness, emotions, or many mental health conditions. Psychology and psychiatry continue debating fundamental frameworks for understanding human behavior. Deploying AI tools in fields where human expertise is still evolving creates compounding uncertainty.
Joseph Weizenbaum, who created ELIZA, an early conversational AI program, never intended it for therapeutic use. When he observed people forming emotional connections with the system and confiding personal struggles, he became deeply concerned about the implications. Speaking in 2005, Weizenbaum warned that as social media expanded and conversational AI improved, the distinction between human and machine interaction would blur dangerously. People might not know whether they’re receiving advice from human experts, AI systems, or malicious actors using AI to manipulate them.
That concern has materialized. Reports of people using ChatGPT as a therapist are increasing, with some users claiming significant life improvements from these interactions. While AI systems can provide information about psychological concepts and suggest coping strategies, they lack the judgment to recognize when someone needs professional intervention, cannot detect subtle warning signs of crisis, and may reinforce harmful thought patterns rather than challenging them appropriately.
The timing of AI failure in these contexts matters enormously. Unlike productivity tools where system limitations cause inconvenience, mental health applications involve vulnerable people in potentially critical situations. An AI system might conduct helpful conversations for weeks before making a recommendation that triggers a crisis. By that point, the user has developed trust and emotional connection with the system, making them more likely to follow harmful guidance.
Human psychology compounds the problem through anthropomorphization. People form emotional attachments to cars, naming them and speaking as if the vehicle has intentions. The tendency to attribute human-like qualities to AI systems operates even more strongly because these systems use language and sometimes emulate emotional responses. Users quickly forget they’re interacting with pattern-matching algorithms rather than entities capable of genuine understanding or care.
Beyond mental health, AI systems show particular limitations in domains requiring judgment about interpersonal conflict, ethical dilemmas, or situations involving significant power dynamics. An AI might suggest a direct confrontation approach that works in some contexts but damages relationships in others. It cannot account for organizational politics, personal histories between individuals, or cultural factors that shape appropriate communication. These limitations make AI guidance potentially harmful in contexts ranging from management advice to relationship counseling to conflict resolution.
Similar concerns apply to applications involving surveillance, policing, or other high-stakes decisions affecting people’s lives. Facial recognition systems demonstrate racial bias. Predictive policing algorithms direct enforcement toward over-policed communities, creating self-fulfilling prophecies. Credit scoring models disadvantage groups with limited formal financial history. These systems embed and scale existing inequities while adding an appearance of objectivity that makes bias harder to challenge.
The solution isn’t abandoning AI development; it’s maintaining clear boundaries between appropriate and inappropriate applications. AI can assist trained professionals who maintain ultimate judgment and responsibility. It can provide information and options while humans make decisions. It can handle routine tasks while humans address exceptional cases. But deploying AI as an autonomous decision-maker in domains involving human psychology, social dynamics, or individual rights crosses a line that current technology doesn’t support.
Actionable Recommendations
For Innovation Leaders
Start with operational friction points rather than moonshot projects. Identify the tedious, repetitive tasks that consume your team’s time and morale. Map workflows that involve gathering information from multiple sources, reformatting data, or updating templates. These “boring” automations deliver measurable returns quickly and build confidence with AI tools before tackling more ambitious applications.
Calculate the true cost of information work. When employees spend hours weekly on status reports, meeting notes, proposal updates, or data aggregation, that time carries direct costs in salaries and opportunity costs in work not being done. Many organizations don’t track this granular time allocation and thus underestimate the value of operational automation.
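A back-of-envelope calculation makes the case concrete. The hourly rate and working weeks below are illustrative assumptions; substitute your own figures.

```python
# Back-of-envelope calculator for the "information work" cost described above.
# All inputs are illustrative assumptions; replace them with your own numbers.
def information_work_cost(headcount: int, hours_per_week: float,
                          loaded_hourly_rate: float, weeks_per_year: int = 48) -> dict:
    weekly_hours = headcount * hours_per_week
    return {
        "weekly_hours": weekly_hours,
        "fte_equivalent": round(weekly_hours / 40, 1),  # assumes a 40-hour work week
        "annual_cost": weekly_hours * weeks_per_year * loaded_hourly_rate,
    }

# The article's example: 50 knowledge workers saving 5 hours each per week,
# priced at an assumed $75/hour fully loaded rate.
print(information_work_cost(50, 5, 75.0))
# -> {'weekly_hours': 250, 'fte_equivalent': 6.2, 'annual_cost': 900000.0}
```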
For Technology Teams
Prioritize data infrastructure before advanced applications. The sexiest AI tool delivers no value if it can’t access relevant information in usable formats. Investing in data standardization, integration, and governance creates the foundation for every subsequent AI initiative. This work lacks glamour but determines success or failure of everything built on top.
Build composable workflows rather than monolithic systems. String together specialized tools through integration platforms instead of attempting custom development for every use case. This approach allows rapid experimentation, reduces vendor lock-in, and lets you benefit from continuous improvements across multiple tools.
For Executives
Reconsider innovation portfolio allocation. The standard model of funding discrete “innovation projects” works poorly for accumulating operational improvements. Create mechanisms for continuous optimization work that doesn’t require project charters and executive sponsorship for each small enhancement.
With nearly 90% of notable AI models in 2024 coming from industry rather than academia, vendors will continue delivering new capabilities rapidly. Organizations that build strong data foundations and establish workflows for adopting new tools will compound advantages over time while competitors remain locked in transformation programs.
Establish clear boundaries for AI deployment. Define categories of decisions or interactions where AI should assist rather than replace human judgment. Make these boundaries explicit in policies and implementation guidance. When teams understand where guardrails exist, they can move faster within those bounds.
For All Organizations
Address the skills gap through augmentation rather than replacement. The question isn’t whether AI will eliminate jobs; it’s how to redeploy human capability toward higher-value work. Employees freed from routine tasks need clear paths to apply their expertise differently. Without deliberate transition management, automation creates organizational disruption without capturing productivity gains.
Measure operational improvement as rigorously as product innovation. Most organizations track innovation pipeline metrics but lack visibility into operational efficiency gains. When automation projects lack clear success metrics, teams can’t learn what works or justify additional investment.
The Paradox of Innovation at Scale
The central paradox of AI-driven innovation is that the organizations best positioned to benefit often face the highest barriers to adoption. Large enterprises possess extensive data, significant resources, and compelling use cases, but they also struggle with technical debt, organizational complexity, and ingrained processes that resist change.
Meanwhile, smaller organizations with less sophisticated infrastructure can move faster precisely because they have less to transform. An eight-person agency can rebuild its entire operations stack in months. A Fortune 500 company needs years just to establish governance frameworks and security protocols for AI tools.
This dynamic creates opportunity for disruption not through superior technology but through superior ability to adopt technology rapidly. The competitive advantage goes not to organizations that develop the most advanced AI capabilities but to those that most effectively deploy commercially available tools to eliminate friction from their operations.
The winners in this environment will be organizations that resist the temptation to pursue transformative AI projects before mastering operational AI applications. They’ll invest in data infrastructure even when it lacks executive appeal. They’ll measure and celebrate time savings from automation even when those savings don’t generate press releases. They’ll view AI as augmentation of human capability rather than replacement of human workers.
Most importantly, they’ll recognize that innovation isn’t about the sophistication of tools or the novelty of applications; it’s about systematically creating more value for customers and stakeholders. Sometimes that comes from revolutionary breakthroughs. More often, it comes from compounding hundreds of small improvements that collectively transform what an organization can accomplish.
The boring stuff matters. It always has. AI simply makes it more obvious by revealing how much organizational effort gets consumed by activities that machines can handle. Organizations that embrace this reality will capture the productivity gains that everyone else is too busy chasing moonshots to notice.