Gray Area Between Computing Paradigms
One of the things that I spend a lot of time thinking about is the gray area between computing paradigms — especially with the rise of what I like to call “generally specialized computing”. Whether you spend your time on campus networks, in the cloud, mining cryptocurrencies, enabling smart cities, or coaxing your algorithms into learning to drive autonomously, the need for powerful and complex computing is omnipresent. Just as the “open office” plan aims to foster unexpected collaboration between workers, the modern compute platform is a hybrid mix of acceleration, high bandwidth, and parallel and distributed tasks. Hybrid computing is here to stay. What hasn’t changed, however, is the terminology experts use to describe their work — and, often, to segment themselves from others. High performance computing has always been a niche where creativity in engineering flourishes, and it’s high time we widened our vocabulary to be more inclusive of all its aspects.
The “Humpty Dumpty” Dilemma
I moved into high performance shared-memory computing from hyperscale distributed computing to solve (what I thought would be) a simple problem. I needed to fit more data in memory so that I could simultaneously persist, query, and modify that data as fast as the new stuff was coming in. The hyperscale world became very good at performing atomic transactions in parallel to achieve tremendous throughput. Indeed, its design patterns for doing so became a key part of the definition of “hyperscale” itself. To achieve these speedups, we spent a lot of time breaking Humpty Dumpty into pieces (now we call this sharding) and ensuring that we could use all of those pieces in client applications concurrently. This was great for atomicity and resiliency. But understanding the sum of the parts meant that we needed to put Humpty Dumpty back together again. And we faced a choice: either work to re-assemble the pieces at a rate that could keep up with the influx of new and changed data, or query a re-assembled dataset. We could not do both. At least not with the hardware paradigms we had been working on until then. I left that paradigm to explore options in vertical, parallel, and memory-coherent computing to help solve that dilemma.
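The trade-off above can be made concrete with a toy sketch. This is not any particular system’s design — the shard count, key scheme, and aggregate query are all hypothetical — but it shows why per-shard writes scale so well while a “whole Humpty Dumpty” query must visit every piece:

```python
from collections import defaultdict

NUM_SHARDS = 4  # hypothetical shard count for illustration

def shard_for(key: str) -> int:
    # Hash-based placement: each key deterministically lands on one shard,
    # so writes to different keys rarely contend with one another.
    return hash(key) % NUM_SHARDS

shards = defaultdict(dict)  # shard id -> its own key/value store

def write(key: str, value) -> None:
    # Touches exactly one shard. In a real deployment each shard lives on
    # its own node, which is what makes sharded ingest scale horizontally.
    shards[shard_for(key)][key] = value

def query_total() -> int:
    # A global aggregate is a scatter-gather: it must visit *every* shard.
    # While it runs, new writes can land on shards it has already visited,
    # so the answer may be stale the moment it is produced -- the
    # "re-assemble fast enough, or query a re-assembled snapshot" dilemma.
    return sum(v for shard in shards.values() for v in shard.values())

# Ingest 100 records, then ask a whole-dataset question.
for i in range(100):
    write(f"sensor-{i}", 1)

print(query_total())
```

The sketch only holds together because ingest pauses while `query_total` runs; interleave the two under real load and you either throttle writes or accept a stale answer, which is the choice described above.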