Originally published on LinkedIn, March 18, 2025
When I wrote “AI Market Inflection Point: Hype, Reality, and What Comes Next” on March 5, 2025, I laid out some hard truths about AI adoption. I highlighted how enterprises were moving beyond the hype, argued that AI infrastructure investments needed to be optimized, and warned that storage architectures had to evolve to keep up.
I wasn’t expecting to write an update 13 days later, but here we are: those exact shifts are playing out across the industry at an incredible pace. This follow-up highlights how recent trends align with what I covered in that opinion article and what they mean for AI adoption moving forward.
The Patience Imperative in AI Adoption
What I Said Then: AI adoption isn’t about “if” anymore; it’s about “how fast” and “at what cost.” Enterprises rushed into AI, often without a clear path to ROI. I predicted that this would lead to a shift from AI experimentation to AI accountability.
What’s Happening Now: At the HumanX 2025 conference (March 10-13), industry leaders reinforced this perspective, emphasizing that AI investments require a long-term strategy. Businesses are realizing that they need to integrate AI in a way that delivers real, measurable outcomes, not just flashy demos.
What It Means Going Forward: The pressure is on vendors to prove value. Companies that built AI strategies around hype will struggle, while those that focused on business-driven AI adoption will pull ahead.
The Cooling GPU Gold Rush & the Shift Toward AI Orchestration
What I Said Then: Companies poured money into GPUs, expecting immediate returns, only to find themselves with underutilized hardware. I pointed out that success in AI isn’t about simply buying more GPUs; it’s about orchestrating them effectively.
What’s Happening Now: This shift is happening in real time. The industry is transitioning from a “buy more GPUs” mentality to one that focuses on optimizing AI workloads. The emphasis has shifted from raw computing power to the efficiency of AI pipelines, ensuring that computing, storage, and networking work together effectively. Cloud and hybrid cloud architectures are playing an increasingly important role. Amid ongoing supply chain constraints and GPU shortages, companies are leveraging cloud-based GPU resources to train models at scale before deploying those models on-premises on their own GPUs. While customer-owned GPUs may not match the speed of hyperscaler resources, this approach balances cost, performance, and accessibility, delivering the results businesses need without over-investing in infrastructure.
What It Means Going Forward: AI success will be defined by how well companies optimize their infrastructure, not just how much they invest in hardware. Organizations that adopt a hybrid approach, leveraging cloud for model training and on-premises GPUs for inference, will have a major advantage. Companies that continue relying solely on large, upfront hardware investments risk running into supply chain bottlenecks, while those that embrace dynamic AI infrastructure will maintain agility and cost control.
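The hybrid approach above can be sketched as a simple placement policy: burst large training runs to rented cloud GPUs while keeping inference and small jobs on customer-owned hardware. This is a toy illustration only; the `Job` class, `place_job` function, and the GPU-hour threshold are assumptions for the sketch, not any real scheduler's API.

```python
# Toy sketch of a hybrid placement policy: train in the cloud, infer on-prem.
# All names (Job, place_job) and the capacity numbers are illustrative
# assumptions, not a real orchestration framework.
from dataclasses import dataclass


@dataclass
class Job:
    name: str
    kind: str         # "training" or "inference"
    gpu_hours: float  # rough estimate of the job's GPU-hour demand


def place_job(job: Job, on_prem_free_gpu_hours: float) -> str:
    """Route oversized training runs to cloud GPUs; keep inference and
    anything that fits on customer-owned hardware."""
    if job.kind == "training" and job.gpu_hours > on_prem_free_gpu_hours:
        return "cloud"      # burst to hyperscaler capacity for big runs
    return "on_prem"        # serve locally for cost control


jobs = [
    Job("llm-pretrain", "training", 5000.0),
    Job("chat-serving", "inference", 40.0),
    Job("fine-tune", "training", 80.0),
]
placements = {j.name: place_job(j, on_prem_free_gpu_hours=120.0) for j in jobs}
```

In practice the decision would also weigh data gravity, egress costs, and queue times, but even this crude split captures the cost/agility trade-off: only the pretraining run that exceeds local capacity goes to the cloud.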
Data Gravity & AI Storage Evolution
What I Said Then: AI performance isn’t just about GPUs; it’s about how quickly data can be accessed, moved, and processed. I stressed that data gravity was a bigger bottleneck than compute and that storage needed to evolve.
What’s Happening Now: We’re seeing a major shift in how AI data is managed. The rise of “product engineers,” hybrid roles that blend product management with deep technical expertise, shows that companies are rethinking AI storage. They’re focusing on data mobility, accessibility, and governance, not just storage speed alone.
What It Means Going Forward: AI-ready storage isn’t about any one factor; it requires a balance of performance, flexibility, and scalability. Without high performance, AI models stall. Without flexibility, AI workflows break. Without scalability, growth is unsustainable. Companies that embrace intelligent data orchestration, optimizing all three, will have a real competitive edge.
The End of AI Experiments & The Demand for ROI
What I Said Then: CFOs weren’t going to keep signing off on AI budgets without proof of ROI. I predicted enterprises would move from “let’s try AI” to “let’s prove AI delivers.”
What’s Happening Now: A recent industry survey by Writer, published on Business Wire on March 18, 2025, revealed that 68% of executives believe AI adoption has caused divisions within their companies, with 42% stating that AI is “tearing their company apart.” This growing disconnect between leadership and employees underscores the importance of aligning AI strategies with business goals while addressing internal concerns. Executives want to push AI forward, but employees are skeptical. The result? Companies are forced to scrutinize AI investments more closely, ensuring they align with real business outcomes.
What It Means Going Forward: Financial discipline will drive AI adoption. Vendors that can’t show a direct connection between AI and business impact will struggle to gain traction.
The CXL Disruption & AI Infrastructure Optimization
What I Said Then: Compute Express Link (CXL) is a game-changer. It has the potential to solve memory bottlenecks, improve GPU utilization, and reduce wasted resources. However, I warned that software readiness would be a challenge.
What’s Happening Now: While CXL adoption isn’t mainstream yet, the broader shift toward AI workload optimization is laying the groundwork for it. However, software support and enterprise adoption remain slow, with most organizations still in the early exploration phase rather than actively deploying CXL-enabled architectures. The concept remains promising, but real-world implementation lags behind other AI infrastructure advancements. The focus is moving away from brute-force compute power and toward smarter infrastructure design.
What It Means Going Forward: AI architectures that embrace memory disaggregation and dynamic resource allocation will be ahead of the curve when CXL takes off.
Since we aren’t seeing CXL adoption take off quite yet, let’s look into the crystal ball and see what may be possible. I have three ideas: one is a safe bet, the second is a bit more outlandish and visionary, and the third is definitely edgy and hardcore.
- Safe Prediction: CXL Adoption Will Gradually Increase in Data Centers – As data-intensive applications like AI and machine learning continue to grow, the need for efficient memory utilization becomes critical. CXL offers a solution by enabling memory pooling and coherent interconnects between processors and accelerators. Over the next few years, we can expect a steady increase in CXL adoption within data centers to enhance performance and scalability.
- Visionary Prediction: CXL Will Enable Disaggregated Server Architectures – CXL’s ability to decouple memory from specific processors paves the way for disaggregated server architectures. In this model, compute, memory, and storage resources are modular and can be dynamically allocated based on workload demands. This flexibility could lead to more efficient resource utilization and reduced operational costs.
- Edgy Prediction: CXL Will Make Traditional Memory Hierarchies Obsolete – As CXL matures, it could fundamentally disrupt existing memory hierarchies by allowing for more flexible and efficient memory architectures. This shift might lead to reevaluating current computing paradigms, potentially rendering traditional memory hierarchies obsolete.
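To make the memory-pooling idea behind these predictions concrete, here is a deliberately simplified model: hosts borrow capacity from a shared pool when a workload outgrows local DRAM, and return it when the job finishes. This is a toy illustration under stated assumptions; real CXL allocation involves hardware, firmware, and OS support, and the `MemoryPool` class here is invented for the sketch.

```python
# Toy model of CXL-style memory pooling: hosts draw extra capacity from a
# shared pool instead of being capped at their fixed local DRAM.
# Purely illustrative; not a real CXL software interface.
class MemoryPool:
    def __init__(self, pooled_gb: float):
        self.free_gb = pooled_gb  # total shared capacity available to hosts

    def allocate(self, gb: float) -> bool:
        """Grant a host extra capacity from the pool, if enough remains."""
        if gb <= self.free_gb:
            self.free_gb -= gb
            return True
        return False  # request refused; host must wait or spill to storage

    def release(self, gb: float) -> None:
        """Return capacity when a workload finishes."""
        self.free_gb += gb


pool = MemoryPool(pooled_gb=512.0)
granted = pool.allocate(300.0)   # host A grows beyond its local DRAM
refused = pool.allocate(400.0)   # host B's oversized request is denied
pool.release(300.0)              # host A's job ends; capacity returns
```

The point of the sketch is the economics: capacity that would otherwise sit stranded inside one server becomes a fungible resource, which is exactly why pooling appeals to AI workloads with bursty memory demands.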
I believe CXL will help improve the computing landscape. Pooling memory so AI workloads can access more of it without buying more GPUs is just the beginning of what the future holds. I’d like to hear your thoughts about CXL in the comments.
The Bottom Line: AI Pragmatism Wins
Looking back, it’s clear that many of the trends I covered just days ago are now shaping the AI landscape. AI isn’t just about more GPUs or bigger models; it’s about building infrastructure that scales intelligently and delivers real business outcomes.
If you’re looking to:
- Optimize AI workloads without over-investing in underutilized hardware
- Solve data gravity challenges with a modern, AI-ready storage architecture
- Future-proof your AI infrastructure with scalable, cost-efficient solutions
Now is the time to take a strategic approach. AI success isn’t about luck; it’s about having the right strategy.
