Originally published on LinkedIn, March 5, 2025
Too much of the AI conversation is noise. Let’s face it, hype doesn’t drive business outcomes. I’ve built my career on identifying the market trends and signals that actually move the needle for businesses. Maybe that’s pragmatism, or just the ability to tune out distractions and focus on what truly matters. With AI evolving at a rapid pace, cutting through the noise has never been more critical. In that spirit, and given my focus on AI at Hitachi Vantara, here are my high-level observations.
One thing is clear: customers are beyond being dazzled by “shiny objects.” They’re ready for practical business outcomes.
AI adoption is no longer a question of “if”, but of “how fast” and “at what cost.”
Over the past year, enterprises have raced to integrate AI, driven by GenAI breakthroughs, hyperscaler investments, and competitive pressures. But as the dust settles, a new challenge emerges: how do businesses transition from proof-of-concept AI to real, ROI-driven AI solutions at scale?
Key Observations from the AI Infrastructure & Storage Market:
1. The “GPU Gold Rush” is cooling – Many companies poured money into AI hardware, particularly GPUs, anticipating an immediate return. Now underutilization is rampant: data centers are filled with expensive accelerators waiting for optimized workloads. The industry is shifting from a “buy more GPUs” mentality to optimizing AI pipelines so that compute, storage, and networking work together efficiently. The companies winning in AI aren’t just deploying GPUs; they’re orchestrating them intelligently to avoid waste (see the utilization probe after this list).
2. Data gravity is real – AI isn’t just about GPUs. The true bottleneck is how quickly data can be accessed, moved, and processed. Enterprises are realizing that latency, not compute, often dictates AI performance. This is forcing organizations to rethink their storage architectures, file/object strategies, and AI data lakes. AI workloads demand high-throughput, low-latency access to massive, unstructured datasets, and many legacy storage solutions weren’t designed for this. The industry is now focusing on feeding data to compute efficiently rather than relying on brute-force hardware (see the throughput probe after this list).
3. Beyond AI “experiments” – The early-adopter rush to implement AI led many companies to invest first and ask questions later. Now CFOs are demanding measurable ROI before greenlighting further AI spending. The result? Enterprises are scrutinizing their AI use cases, infrastructure investments, and operational costs, and AI vendors that cannot connect their solutions to real business outcomes will struggle. The shift is clear: from “let’s try AI” to “let’s prove AI delivers.” Those who ignored cost efficiency in favor of rapid adoption are now being forced to rethink their approach (see the ROI arithmetic after this list).
4. Storage vendors are pivoting, but not all pivots are equal – Some storage vendors are scrambling to align their value propositions with AI workloads. The reality? Not all storage is AI-ready. Many vendors simply retrofit existing solutions with faster flash or bolt on GPU integrations and call the result “AI-optimized.” The real challenge isn’t just speed; it’s data accessibility, mobility, and governance across the AI pipeline. AI workloads require seamless data orchestration from edge to core to cloud, yet most storage offerings still operate in silos. Some vendors are taking a hardware-first approach, pushing AI-specific storage appliances that lock customers into proprietary formats and rigid architectures. Others recognize that true AI infrastructure must be open, scalable, and workflow-driven, bridging the gap between raw performance and real business outcomes. The market is ripe for consolidation and differentiation: the question isn’t just who has the fastest storage, but who can enable AI at scale without compromising flexibility, cost, or long-term data strategy.
5. CXL is on the horizon – AI/ML models are pushing the limits of memory bandwidth and scalability. Compute Express Link (CXL) is emerging as a potential game-changer by decoupling memory from processors, enabling disaggregated, shareable, and scalable memory architectures: no more DRAM bottlenecks, no more wasted resources. Today, AI hardware is often constrained by memory limits, forcing organizations to overprovision costly resources just to keep models running efficiently. CXL changes this equation by enabling dynamic memory pooling across CPUs, GPUs, and accelerators, with the potential for significant cost savings and performance gains (see the pooling model after this list). Adoption won’t happen overnight, though. The biggest challenge is software readiness: AI infrastructure teams will need to rethink their memory management, workload scheduling, and orchestration models before CXL sees widespread deployment.
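To make these observations concrete, a few illustrative sketches (all Python, all simplified). First, observation #1: underutilization is measurable. The snippet below polls per-GPU compute and memory utilization through NVIDIA’s NVML bindings (the pynvml module, installable as nvidia-ml-py); the 30% “underused” cutoff is an illustrative assumption, not an industry standard.

```python
# Snapshot per-GPU utilization via NVIDIA's NVML bindings.
# Assumes an NVIDIA driver plus `pip install nvidia-ml-py` (imported as pynvml).
import pynvml

UNDERUSED_THRESHOLD = 30  # % compute utilization; illustrative cutoff, tune per fleet

def snapshot_utilization() -> None:
    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # .gpu / .memory, in %
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # .used / .total, in bytes
            flag = "UNDERUSED" if util.gpu < UNDERUSED_THRESHOLD else "busy"
            print(f"GPU{i}: compute {util.gpu}%, memory I/O {util.memory}%, "
                  f"memory used {mem.used / mem.total:.0%} -> {flag}")
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    snapshot_utilization()
```

Logged fleet-wide over time, numbers like these are what justify, or kill, the next GPU purchase order.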
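For observation #2, the cheapest first test of data gravity is measuring how fast storage can actually feed the accelerators. This self-contained probe times sequential reads of a synthetic file; the file and chunk sizes are arbitrary assumptions, and as the comments note, the OS page cache can flatter the result.

```python
# Rough storage-throughput probe: how fast can training data stream off disk?
# Note: the OS page cache may inflate the result since the file was just written;
# use a file much larger than RAM (or drop caches) for a more honest number.
import os
import tempfile
import time

CHUNK = 8 * 1024 * 1024          # 8 MiB per read (illustrative)
FILE_SIZE = 256 * 1024 * 1024    # 256 MiB synthetic "dataset" (illustrative)

def measure_read_throughput() -> float:
    """Return sequential read throughput in GB/s over a synthetic file."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        for _ in range(FILE_SIZE // CHUNK):
            f.write(os.urandom(CHUNK))
    try:
        start = time.perf_counter()
        with open(path, "rb", buffering=0) as f:
            while f.read(CHUNK):
                pass
        elapsed = time.perf_counter() - start
        return FILE_SIZE / elapsed / 1e9
    finally:
        os.remove(path)

if __name__ == "__main__":
    print(f"sequential read: {measure_read_throughput():.2f} GB/s")
    # If this is below what your GPUs can ingest, the GPUs wait:
    # data, not compute, is the bottleneck.
```

If the measured number sits an order of magnitude below the accelerators’ ingest rate, buying more GPUs will not help.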
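Observation #3 ultimately reduces to arithmetic a CFO can check. The sketch below computes effective cost per inference as a function of utilization; every dollar figure and throughput number here is hypothetical and should be replaced with your own.

```python
# Back-of-the-envelope AI ROI check. Every input below is hypothetical.

def cost_per_inference(gpu_hour_cost: float, inferences_per_hour: float,
                       utilization: float) -> float:
    """Effective cost of one inference, given how busy the GPU actually is."""
    return gpu_hour_cost / (inferences_per_hour * utilization)

GPU_HOUR_COST = 3.00          # $/GPU-hour (hypothetical)
INFERENCES_PER_HOUR = 50_000  # throughput at full load (hypothetical)
VALUE_PER_INFERENCE = 0.0002  # $ of business value per call (hypothetical)

for util in (0.25, 0.50, 0.90):
    cost = cost_per_inference(GPU_HOUR_COST, INFERENCES_PER_HOUR, util)
    margin = VALUE_PER_INFERENCE - cost
    verdict = "pays off" if margin > 0 else "LOSES MONEY"
    print(f"utilization {util:.0%}: ${cost:.6f}/inference, "
          f"margin {margin:+.6f} -> {verdict}")
```

With these made-up inputs, the same model loses money at 25% utilization and pays off at 50%, which is exactly why observations #1 and #3 are linked.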
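Finally, observation #5 in miniature: the economic case for pooled memory is that sizing one shared pool for the peak of aggregate demand beats provisioning every device for its own worst case. The traces below are synthetic and the savings figure is illustrative; this is a toy model of the pooling idea, not a CXL benchmark.

```python
# Toy model of why pooled (CXL-style) memory can beat fixed per-device DRAM.
# Synthetic, bursty per-device memory demand; not a benchmark of real hardware.
import random

random.seed(42)
DEVICES, TIMESTEPS = 8, 1_000

# Per-device demand in GB over time: mostly a 16 GB baseline, occasional bursts.
traces = [[random.choice([16, 16, 16, 64, 128]) for _ in range(TIMESTEPS)]
          for _ in range(DEVICES)]

# Static provisioning: each device carries enough DRAM for its own peak.
static_gb = sum(max(trace) for trace in traces)

# Pooled provisioning: one shared pool sized for the worst aggregate moment.
pooled_gb = max(sum(trace[t] for trace in traces) for t in range(TIMESTEPS))

print(f"static (sum of per-device peaks):  {static_gb} GB")
print(f"pooled (peak of aggregate demand): {pooled_gb} GB")
print(f"pooling saves {1 - pooled_gb / static_gb:.0%} of provisioned memory")
```

The gap between those two numbers is the overprovisioning that CXL-style disaggregation aims to reclaim, once the software stack catches up.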
Looking across these observations, I believe CXL (#5) will have a major impact on GPU utilization (#1), accelerating the shift from raw AI hardware investments to true infrastructure optimization.
The Bottom Line
AI isn’t just about more GPUs or bigger models; it’s about building infrastructure that scales intelligently and delivers real business outcomes.
The next 12–18 months will separate AI hype from AI reality. Enterprises that balance cost, performance, and scalability, rather than chasing the next shiny object, will come out ahead.
AI success isn’t about luck. It’s about strategy.
