HPC shops are used to doing math – it is what they do for a living, after all – and as they evaluate their hybrid computing and storage strategies, they will be doing a lot of math. And a lot of experimentation. And that is because no one can predict with anywhere near the kinds of precision that HPC shops tend to like just how their applications and their users are going to make use of on premises and cloud capacity.
This may all settle out as experience builds, but right now – in the early days of practical HPC processing and data storage in the cloud – there are more hard questions than solid answers. Indeed, it may never precisely settle out, because everything is always changing. Cloud brings many new and interesting options, even if it does add complexity.
While HPC centers are understandably focused on the amount and the nature of the compute that they can bring to bear, both from their own clusters and from those they rent from cloud service providers, it is data that drives the choices people make about the cloud, and it is data that is seeding the uptake of hybrid setups.
“To a large extent, the location of the data and its size really determines the location of the compute,” Rob Lalonde, vice president and general manager of the cloud division at Univa, tells The Next Platform. “If there is a petabyte dataset sitting on premises or in the cloud, the odds are that you are not going to move it. If a job is suddenly very high priority and the dataset is small, then you may move it off the cluster on premises and up to the cloud where a lot more compute resources are available to get it done faster – provided the job scales, of course.”
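Lalonde's rule of thumb can be sketched as a simple placement heuristic. The sketch below is purely illustrative, not Univa's actual scheduling logic; the threshold, field names, and location labels are all assumptions chosen to make the decision rule concrete.

```python
# Illustrative data-gravity placement heuristic. Hypothetical names and
# thresholds; not Univa's real scheduler.
from dataclasses import dataclass

@dataclass
class Job:
    dataset_gb: float      # size of the input dataset
    data_location: str     # "on_prem" or "cloud"
    high_priority: bool
    scales_well: bool      # does the job actually benefit from more nodes?

# Assumed cutoff above which moving the data costs more than it saves;
# a petabyte-scale dataset never clears this bar.
MOVE_THRESHOLD_GB = 1_000

def place(job: Job) -> str:
    """Follow the data, unless a small, urgent, scalable job
    justifies bursting up to the cloud's larger resource pool."""
    if job.dataset_gb > MOVE_THRESHOLD_GB:
        return job.data_location           # data gravity wins
    if job.high_priority and job.scales_well:
        return "cloud"                     # burst for faster turnaround
    return job.data_location               # default: stay with the data

# A petabyte dataset stays put; a small, high-priority job moves.
print(place(Job(1_000_000, "on_prem", high_priority=True, scales_well=True)))
print(place(Job(50, "on_prem", high_priority=True, scales_well=True)))
```

Note that the "provided the job scales" caveat is load-bearing: without the `scales_well` check, a high-priority job that cannot use the cloud's extra nodes would be moved for no benefit.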