CES 2026: When the Future We Built 10 Years Ago Finally Arrived
What happens when you spend a week at the world’s largest tech conference and realize you’ve seen it all before?
Walking out of the CTA Tech Trends presentation on day one of CES 2026, I had a surreal moment. Every trend they highlighted - personalization, platform ecosystems, connected spaces, digital health, precision healthcare - was exactly what we were prototyping at Accenture Liquid Studios a decade ago.
My first thought: “Wow, we were really ahead of our time.”
My second thought: “Wait... why are these STILL the trends?”
Then it hit me: The work we did back then was exploring 5-10 year horizons. Technology has finally caught up. These aren’t future trends anymore - they’re active deployments happening right now.
And that realization reframed everything I saw over the next four days.
The Compute Abundance Paradox
Walking the AMD and NVIDIA exhibits, I watched demos showcasing rapidly declining costs for token generation - both monetary and computational. Processing efficiency in data centers is advancing faster than our ability to consume it.
Here’s the paradox: We’re building infrastructure for today’s computational requirements, but those requirements are dropping faster than we’re building capacity.
If inference optimization continues (or we move to entirely new fluid architectures), we might find ourselves in 2-3 years with massive excess capacity. We're scaling for scarcity that may not exist.
But here’s where it gets interesting:
The AMD keynote showcased video generation, animation, and world-building capabilities that would have required render farms just a few years ago. The creative workflow is transforming:
Vision in your head
Rapid prototyping in minutes
Iterative refinement
Bridge to reality
We’ve democratized creativity. And if compute abundance becomes reality, we’ll unlock exponentially more creative output from people who were previously constrained by technical barriers.
What really excites me? Both NVIDIA and AMD are enabling developers to build locally - not just with LLMs, but with multimodal action models, voice and language models, and models that process sensor fusion data and react in real time. Full-stack AI prototyping capability is sitting on your desk now.
Human-Readable Code is Dead (And That’s Okay)
A conversation with a former Liquid Studios colleague fundamentally shifted how I think about AI-generated code.
His take: “Human-readable code is dead.”
My initial reaction was defensive. But then he walked me through the history of abstraction:
Assembly → C → JavaScript → TypeScript → AI-generated code
We’ve always been abstracting away from “readable” lower-level code. Nobody’s writing raw assembly saying “this is the ONLY way.” We accept that compilers have bugs, memory leaks, and inefficiencies. We constantly update them with better patterns.
JavaScript is already a huge abstraction. TypeScript adds another layer. AI is just the next abstraction layer.
The shift in how we evaluate code:
OLD CRITERIA: Is this human-optimized, readable, well-commented?
NEW CRITERIA: Does this achieve the desired result with predictable outcomes? Are edge cases articulated thoroughly? Does it meet requirements and vision?
Here’s what makes me addicted to “vibe coding”: the ability to visualize what I want to happen and turn vision into reality in minutes.
The pushback I hear: “But AI code isn’t efficient!”
Neither was early compiler output. We improved it. The same will happen here.
The real question isn’t whether AI can write perfect code today. It’s whether you’re ready to shift your evaluation criteria from “is this how a human would write it?” to “does this work reliably?”
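What does "does this work reliably?" look like in practice? A minimal sketch: instead of reviewing a generated function's style, you check it against a behavioral contract that articulates the requirements and edge cases up front. The function name and cases below are illustrative assumptions, not anything from the article.

```python
# Sketch of outcome-based code evaluation: the function body is treated
# as an opaque artifact (imagine it was AI-generated); acceptance is
# decided by whether it meets the articulated contract.

def dedupe_preserving_order(items):
    # Stand-in for AI-generated code we never read line by line.
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

def evaluate(fn):
    """Accept the function if it satisfies the requirements,
    including explicitly listed edge cases."""
    cases = [
        ([], []),                      # edge case: empty input
        ([1, 1, 1], [1]),              # edge case: all duplicates
        ([3, 1, 3, 2, 1], [3, 1, 2]),  # requirement: order preserved
    ]
    return all(fn(inp) == expected for inp, expected in cases)

print(evaluate(dedupe_preserving_order))  # → True
```

The point isn't the specific checks; it's that the contract, not the code's readability, becomes the artifact humans author and review.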
The Dark Factory Principle: Why Humanoid Robots Miss the Point
At CES 2026, I saw dozens of humanoid robots. Arms. Hands. Walking. Picking up boxes.
And I think we’re sandbagging progress.
A conversation about “dark factories” reframed everything. A dark factory is so automated you don’t need lights - because no humans enter.
Here’s the problem with humanoid robots: We’re building robots with arms to pick up boxes and move them to other places.
We’re automating human processes instead of eliminating them entirely.
Why move boxes at all? Why have discrete pickup/dropoff points? Why design systems that require human-shaped movement patterns?
This is the same mistake we made early in digital transformation: replicating paper processes in software instead of reimagining the workflow.
The real breakthrough isn’t making robots work like humans. It’s designing systems where human-shaped work isn’t necessary.
The pattern: We’re at an inflection point. Stop asking “how do we automate what humans do?” Start asking “what would this look like if humans were never part of the equation?”
The companies that figure this out won’t just be more efficient. They’ll be operating in a completely different paradigm.
AI Companions: From Isolation to Community (Why I Was Wrong)
I’ve been publicly concerned about AI home companions for months. At CES, a conversation with the Aviden team completely changed my perspective.
My fear: AI companions would anchor isolated seniors at home, making the loneliness epidemic worse.
Here’s what I missed: AI companions as stepping stones, not destinations.
Think about the bookends of aging well:
Isolated: At home, sedentary, disconnected
Thriving: Active, community-engaged, social
I assumed AI companions locked people into the isolated end.
The Aviden model showed me a graduated re-integration approach:
Phase 1: Break the isolation habit - promote movement in small steps, build micro-habits of engagement, lower the activation energy to re-enter community.
Phase 2: Virtual bridges - connect with other users virtually first, build comfort with social interaction, create shared experience and identity.
Phase 3: In-person community - facilitate real-world meetups. The AI becomes a vehicle to get started. The community becomes “so much more.”
This reminded me of my parents. They lived in the suburbs for years. Isolated. Never talked about neighbors. Then they got a dog. Now they know everyone in the neighborhood.
The dog didn’t replace community - it created a reason to engage with it.
Can AI intentionally reinforce beneficial behaviors and get people back into their communities? If designed right, yes.
This aligns with everything I believe about AI: augment humans, strengthen human connection, don’t replace it. The best AI interventions create scaffolding for behavior change, then gradually remove themselves as the human capability strengthens.
The Biometric Data Paradox: Why More Data Doesn’t Mean Better Healthcare
At CES, I spoke with HaloScape about the future of longitudinal health data. We’ve seen this vision before. The challenges haven’t changed.
The pitch: Collect biometric data continuously. Share it with healthcare providers. Enable better outcomes.
The problem: More data ≠ better outcomes.
Healthcare providers have very limited time. Overabundance of information creates cognitive overload. The pushback is valid:
“I don’t trust this data.”
“I can’t see how to use this data.”
“I don’t have time to interpret this.”
Here’s what we’re missing: qualitative context.
We obsess over quantitative metrics while ignoring qualitative measures that provide critical context:
Habits and behavior patterns
How patients are feeling (subjective experience)
Journal entries
“Little pains” they forget to mention
Social determinants of health
Biometric data without qualitative context is like having vitals without knowing the patient just ran up three flights of stairs, or seeing elevated heart rate without knowing about housing instability.
This came up in my webcast with Dr. Garrett Sessel months ago. The critical information gets lost when we focus purely on quantitative metrics.
The real opportunity for AI in healthcare isn’t autonomous diagnosis or data collection at scale. It’s AI that synthesizes biometric data with qualitative context and presents providers with actionable insights - not data dumps.
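To make that synthesis concrete, here is a deliberately tiny sketch of the idea: pair a quantitative reading with its qualitative context before surfacing anything to a provider. Every field name, threshold, and keyword below is an illustrative assumption, not a real clinical rule or product API.

```python
# Hedged sketch: an elevated vital sign means something different
# depending on qualitative context (journal notes, recent activity).
# The goal is one actionable line, not a data dump.

from dataclasses import dataclass, field

@dataclass
class Reading:
    heart_rate: int                 # beats per minute
    context_notes: list = field(default_factory=list)  # journal entries, habits

def triage(reading: Reading) -> str:
    """Return a one-line insight for the provider (illustrative logic only)."""
    if reading.heart_rate <= 100:
        return "Within normal range; no action."
    # Elevated reading: only flag it if no benign explanation
    # appears in the qualitative context.
    benign = ("climbed stairs", "exercised", "ran")
    for note in reading.context_notes:
        if any(b in note for b in benign):
            return "Elevated HR explained by recent activity; no action."
    return "Elevated HR with no noted activity; review with patient."

print(triage(Reading(118, ["climbed stairs to apartment"])))
# → Elevated HR explained by recent activity; no action.
```

A real system would learn this context rather than hard-code it, but the shape is the same: the qualitative layer decides whether the quantitative signal deserves a provider's limited time.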
The functional capabilities are there. The last mile is figuring out how to make it operationally viable in real clinical workflows.
Sound familiar? That’s the pattern across every CES trend I saw.
Physical AI & The Last Mile Problem
CES 2026 was full of “AI-powered” everything. But the thing that genuinely excites me? Physical AI.
AI that interacts with real-time, real-world data. Understands its environment. Reasons through the next best action.
This is the next evolution: edge + air-gapped AI that operates autonomously with logical reasoning. No cloud dependency. Real-world consequences. And yes, legitimate concerns we need to explore.
But here’s what I keep seeing across immersive reality, robotics, and physical AI deployments:
✅ Functional capabilities: There
❌ Operational viability: Not quite
The last mile challenge:
Battery life constraints
Device management in the field
Charging infrastructure
Tracking distributed hardware at scale
The tech works in demos. Managing these devices in production at scale? That’s still the blocker.
This is the pattern I saw everywhere at CES:
Code generation: Capability ✓ | Trust in production ✗
Healthcare AI: Capability ✓ | Provider workflow integration ✗
Biometric monitoring: Data collection ✓ | Meaningful synthesis ✗
Immersive tech: Functional ✓ | Field management ✗
There’s a gap between “technically possible” and “operationally viable at scale.”
The Meta-Pattern: Removing Constraints That Are Now Obsolete
Looking back across every conversation, every keynote, every demo, a single theme emerges:
Every meaningful advancement at CES 2026 was about removing intermediaries and constraints that were historically necessary but are now obsolete.
Between thought and code
Between human process and automation
Between creative vision and output
Between developer and deployed AI model
Between isolation and community
Between data collection and clinical insight
The technologies we prototyped 10 years ago assumed these constraints were permanent. They’re not.
What This Means for Leaders
If you’re making 5-10 year bets today, remember: Innovation timelines are longer than we think, but the acceleration at the end is faster than we expect.
Success requires more than technology capability - it requires ecosystem readiness, trust-building, workflow integration, and operational viability at scale.
The vendors will show you what’s possible. Someone needs to tell you what’s actually ready for your environment, your workflows, your constraints.
Three questions to ask about any “AI innovation” you’re evaluating:
Is this automating a human process, or eliminating the need for that process entirely? The latter is where transformational value lives.
What’s the gap between technical capability and operational viability? Battery life, device management, provider trust, workflow integration - these “last mile” problems kill more innovations than technical limitations.
Does this create more human connection or less? The best AI interventions strengthen human capability and community, then gradually remove themselves as scaffolding.
The Future We’re Building Now
The future we imagined in 2016 arrived in 2026. What are you building now that won’t fully materialize until 2036?
Because I guarantee you: It’s going to take longer than you think. And when it finally arrives, it’s going to happen faster than you expect.
The companies that survive that transition will be the ones who understood the difference between what’s technically possible and what’s operationally viable - and had the patience to bridge that gap thoughtfully.
AJ Bubb is the founder of MXP Studio, a fractional AI strategy consultancy, and host of the Facing Disruption podcast. He helps mid-market companies navigate AI adoption strategically, drawing on 15+ years of innovation leadership from AWS, Accenture, and building emerging tech solutions that actually make it to production.

