Every once in a while, someone asks why my writing about AI tends to sound different from most of what is circulating in the market. While much of the conversation today revolves around models, parameters, and the latest benchmark results, my perspective tends to return to infrastructure, memory pathways, and data movement.
The answer is simple. It comes from where I spent most of my career.
For almost three decades, I have lived in the world of data infrastructure: storage systems, distributed architectures, file systems, object platforms, memory hierarchies, and large-scale data movement such as backup and recovery. When you spend that long watching how information moves through systems, you eventually begin to see technology through a very particular lens.
To me, technology has always looked a lot like plumbing, which is why plumbing analogies show up so often in my writing.
Technology Is Plumbing
That may sound overly simple, but it is one of the most practical ways to understand technology architecture. I call it the blue-collar way of explaining complex systems. Plumbing is not glamorous, just like backup is not glamorous. Neither one makes headlines, but both determine whether the building actually works. If the pipes are poorly designed, the entire structure struggles. If backup fails when you need to recover, the entire organization struggles. When the pipes are designed well, everything above them functions smoothly. In both cases we are talking about data movement. When that movement is predictable and consistent, nobody knows your name. But the moment it stops, or becomes fragile, suddenly everyone not only knows your name, they know your middle name, your last name, and quite possibly your mother’s maiden name.
The Amdahl Perspective
Early in my career, in the late 1980s and early 1990s, I was running a fairly large Novell environment and purchased a few NetFrame systems for our data center. That decision unexpectedly led to several opportunities to speak directly with Carl Amdahl. Those conversations left an impression on me that has never really faded. Carl carried a remarkable intellectual lineage from his father, Gene Amdahl, who was one of the architects of modern computing and the originator of Amdahl’s Law, a principle that explains how the overall performance of a system is constrained by its slowest component.
What struck me during those conversations was not simply the mathematics behind Amdahl’s Law. It was the way Carl thought about systems. He looked at them the way an engineer studies a pipeline. Where is the restriction? Where is pressure building? Where is the flow breaking down? Once you begin to think about computing that way, it becomes very difficult to see it any other way.
Amdahl’s Law can certainly be expressed with equations, but it can also be explained with plumbing. If one section of pipe is narrow, it does not matter how wide the rest of the system is. The flow will always be constrained by that narrow point. Computing systems behave exactly the same way. And not surprisingly, so do AI systems.
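For readers who want the equation behind the plumbing analogy, here is a minimal sketch of Amdahl’s Law in Python. The function name and the 90%/1000x numbers are illustrative choices of mine, not from any particular system:

```python
def amdahl_speedup(parallel_fraction, speedup_factor):
    """Overall speedup when only part of the workload is accelerated.

    parallel_fraction: share of runtime that benefits from the improvement (0..1)
    speedup_factor: how much faster that share becomes
    """
    serial = 1.0 - parallel_fraction  # the narrow section of pipe
    return 1.0 / (serial + parallel_fraction / speedup_factor)

# Accelerate 90% of the work by 1000x and the overall gain still caps
# just under 10x, because the untouched 10% constrains the whole flow.
print(round(amdahl_speedup(0.90, 1000), 2))
```

No matter how large the speedup factor grows, the result can never exceed 1 divided by the serial fraction, which is exactly the narrow-pipe constraint described above.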
What Backup Teaches You About Systems
That perspective was reinforced even more during the years I spent working in data protection. Someone once told me they didn’t want to be the backup admin because, “backup is not sexy.” Well, backup and recovery may not be sexy or sound glamorous, but they teach you a great deal about how systems really behave. When you are responsible for protecting enterprise environments, the challenge is not simply storing data safely. The challenge is moving enormous volumes of data through the system fast enough to meet backup windows and recovery objectives.
Customers were constantly asking the same questions: how do we make backup faster, and how do we reduce the time required to move data from production systems into protected storage? The answer almost always came down to the same set of principles. Eliminate bottlenecks, widen the pipes (increase bandwidth), or reduce friction in the data path. Friction is key. When we wanted to measure raw data movement speed, we would create a backup storage unit that pointed to /dev/null – the infinitely fast repository. We did this to remove all friction and understand the true baseline.
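The /dev/null baseline trick can be sketched in a few lines. This is an illustrative measurement loop of my own, not the actual storage-unit mechanism of any backup product; the sizes are arbitrary:

```python
import os
import time

def baseline_throughput_mb_s(total_mb=256, chunk_mb=4):
    """Measure data movement with zero storage friction by writing to the
    null device -- the 'infinitely fast repository' that never pushes back."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(os.devnull, "wb") as sink:
        for _ in range(total_mb // chunk_mb):
            sink.write(chunk)
    elapsed = time.perf_counter() - start
    return total_mb / elapsed  # MB per second with no storage in the path
```

Comparing this number against throughput to real storage shows how much of the backup window is friction in the data path versus the speed of the pipes themselves.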
Whether the constraint was network bandwidth, disk throughput, metadata coordination, or CPU cycles handling deduplication, the lesson was always the same. The system moves at the speed of its narrowest point. After working on those problems long enough, you stop thinking about systems as collections of features. You start thinking about them as flows – how data moves, where it stalls, and where it accelerates.
Why AI Looks Different From the Infrastructure Layer
When people talk about AI today, the conversation usually begins at the top of the stack. It focuses on models, parameters, benchmarks, and the impressive demonstrations that capture headlines. But if you have spent decades working in infrastructure, your instinct is to look somewhere else first. If you’re like me, you look at the pipes. You look at how the data moves through the system, and you pay attention to the memory hierarchy. You look at the pathways that feed the processors and the orchestration layers that coordinate massive datasets across clusters.
Because if the data cannot move through the system efficiently, the intelligence sitting on top of that system becomes irrelevant. A GPU that cannot be fed data quickly enough is just an expensive piece of silicon waiting for work. The result is that training pipelines stall, inference latency creeps upward, and costs increase as systems sit idle waiting for the next batch of information to arrive. This is the same challenge supercomputers face. If the cores are not being fed data consistently, the system quickly becomes very expensive idle hardware.
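The starvation effect is easy to quantify. Here is a simple model of my own, assuming the data loader and the compute step do not overlap (no prefetching); the millisecond figures are illustrative:

```python
def gpu_utilization(compute_ms_per_batch, load_ms_per_batch):
    """If the loader is slower than the compute step, the GPU waits.
    Utilization is bounded by the feed rate -- the narrow pipe."""
    stall = max(0.0, load_ms_per_batch - compute_ms_per_batch)
    return compute_ms_per_batch / (compute_ms_per_batch + stall)

# Compute takes 50 ms per batch but loading takes 200 ms:
# the GPU is busy only 25% of the time.
print(gpu_utilization(50, 200))
```

Techniques like prefetching and caching exist precisely to hide that load time behind compute, but they can only help up to the point where the pipes themselves become the limit.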
The Largest Data Movement Problem Ever Built
From the outside, AI often looks like a software revolution, but from the infrastructure layer, it looks like the largest data movement challenge the industry has ever faced. Training data must move through distributed storage systems. Data loaders must continuously feed thousands of GPUs. Memory hierarchies must support massive model states. Checkpointing systems must capture enormous datasets quickly enough to protect long-running training jobs in the event of a failure. And inference systems must respond in real time while coordinating data across clusters.
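The checkpointing piece alone is a classic data movement problem: time to checkpoint is simply state size divided by aggregate write bandwidth. A hedged back-of-envelope, using illustrative numbers of my own rather than measurements from any real cluster:

```python
def checkpoint_seconds(state_tb, write_gb_s):
    """Time to flush a training checkpoint: state size / write bandwidth.
    Inputs: state size in TB, aggregate write bandwidth in GB/s."""
    return state_tb * 1024 / write_gb_s

# A 10 TB model-plus-optimizer state over 100 GB/s of aggregate write
# bandwidth takes roughly 102 seconds -- a window the GPUs may sit idle.
print(round(checkpoint_seconds(10, 100)))
```

Run that every hour on a long training job and the storage pipes, not the GPUs, decide how much of the cluster's time goes to actual training.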
All of this is plumbing, and admittedly not glamorous plumbing. But it’s the kind that determines whether a system actually works at scale and how quickly AI can deliver results.
This is also why certain technologies draw my attention earlier than they seem to attract attention in the broader market. When you view computing through the lens of data flow, developments like memory fabrics, CXL, distributed caching layers, and data orchestration architectures become incredibly important. They are not simply incremental infrastructure improvements. They are ways of widening the pipes, and that changes everything.
The AI conversation will continue to evolve – that’s a given. Models will continue to improve, capabilities will expand, and entire industries will adapt around these tools. But beneath all of that progress, the same principle that Gene Amdahl articulated decades ago still applies. Every system ultimately runs at the speed of its narrowest bottleneck. As one of my bosses told me many years ago, “We’re only as strong as our weakest link.” Effectively, that statement defines Amdahl’s Law.
If you spend enough years watching how data moves through systems, you start to recognize that pattern repeating itself across every generation of computing. Mainframes, client-server systems, cloud platforms, and now AI infrastructure all eventually converge on the same fundamental question. How fast can the system move information from where it lives to where it is needed? Artificial intelligence may represent the most advanced software systems humanity has ever built, but the physics of computing have not changed.
The reality is, every system still depends on something surprisingly simple.
The flow through the pipes.
