I keep seeing what is happening in NAND and memory described as a shortage, or worse, as another cycle that will eventually correct itself. In my opinion, that framing misses what is actually going on. Many are using the moment to create panic, which serves no one in the long term. Frankly, I think we need to step back and look at what’s really happening rather than falling into the familiar pattern of treating every supply constraint like it’s 2021 all over again. I started writing this article about three weeks ago, but work got very busy and it took a back seat. Last night I finally sat down to pull my thoughts together, and I want to offer my read on what I’m seeing in the market.
Let’s be clear
Nothing fundamental broke in the supply chain, and I’m sorry to be the bearer of good news to the naysayers: the fabs did not suddenly lose the ability to manufacture NAND. What changed is intent, and intent is a business decision, not a supply failure. Capital investment is being deliberately redirected toward the kinds of memory that feed AI systems, because that is where demand is predictable, margins are higher, and long-term commitments already exist. This is not some mysterious force of nature or an unforeseen disruption; it is manufacturers looking at their capacity, looking at where the money is, and making rational economic decisions about where to invest their next dollar.
EE Times has reported that chipmakers are shifting wafer capacity toward high-margin HBM and DRAM and away from lower-margin NAND, and the economics are clear: HBM and advanced DRAM now generate far more value per wafer than traditional NAND. The constraint is not that production lines are literally being ripped out and retooled overnight. NAND and DRAM manufacturing processes are quite different, often running in separate facilities with different equipment and different process technologies, so you cannot simply flip a switch and turn a NAND fab into an HBM fab. The constraint is that new capacity investment, cleanroom expansions, and advanced tooling are going almost exclusively toward HBM and AI-optimized memory, while legacy NAND fabs remain operational but age in place. Industry capex data back this up: NAND fabs largely continue with limited, cautious investment rather than major expansions, and they are not being refreshed, expanded, or modernized at anything close to the rate they were even three years ago. That pattern tells you everything you need to know about where the industry sees its future.
If you accept that reality, the rest of what we are seeing starts to make sense, almost inevitably so. Pricing volatility, unpredictable availability, and the guidance many enterprises are now getting to plan around constraints rather than expect normal ordering behavior are all downstream effects of that shift in capital allocation and strategic priority. These are not anomalies or temporary disruptions; they are the natural consequences of a market that has fundamentally reoriented itself around a different set of workloads and customers.
This is why I struggle with the idea that this will simply revert to the patterns we saw after the pandemic and similar disruptions, that we’ll go through a typical cycle correction and everything will normalize back to abundant NAND availability at commodity pricing. AI is not a transient workload that will peak and fade like previous technology trends we’ve seen come and go. It is changing how memory is consumed and prioritized at a fundamental level, and once capital investment, production capacity, and engineering focus are optimized for that world, there is very little economic incentive to swing them back. New fabs take years and billions of dollars to bring online, and that investment is not going toward NAND; it is going toward the memory architectures that underpin AI infrastructure. And increasingly, those allocations are locked in through multi-year contracts with hyperscalers and cloud providers before a single wafer is produced, which means the capacity that might have served general enterprise needs in the past is already spoken for.
To be clear, NAND is not disappearing, and I want to emphasize that because I am not predicting some catastrophic end to NAND production. But it no longer sits at the center of gravity it once occupied, when it was the default answer for anything requiring persistent storage at scale. What we are seeing now mirrors what happened with GPUs: a component that was once broadly available became a strategic input to a specific, high-value workload, and supply chains reorganized accordingly, with allocation priorities shifting toward customers who could commit to large volumes and long-term contracts. Memory is following the same path, and pretending otherwise because it makes our planning easier does not change the underlying reality.
Uncomfortable Realities
What makes this uncomfortable for many infrastructure plans is that it forces a different conclusion than the one most organizations have been operating under for the last decade. Memory is no longer behaving like a neutral commodity that you can order on demand and expect predictable availability and pricing. Rather, it has become a strategic input, and manufacturers are treating it accordingly, making allocation decisions that reflect long-term bets on where the industry is headed rather than short-term responsiveness to spot-market demand. When that happens, everything downstream feels less predictable, even if nothing is technically “broken” in the traditional sense of production failures or quality issues.
So, this is not really about coping mechanisms or clever tiering strategies that let you work around the edges of the problem, though those tactics certainly have their place in near-term planning. It is about recognizing that the underlying assumptions most enterprises still carry about memory availability are anchored to a world that is already fading, where NAND was abundant, and manufacturers competed aggressively for your business with pricing and availability. The center of gravity has moved toward AI infrastructure and the memory architectures that support it, and planning as if it has not, as if this is just another cycle that will eventually correct itself back to the old normal, is where the real risk now sits for organizations that depend on predictable access to memory technologies.
