The high-performance computing (HPC) industry is at an exciting growth stage. New application deployment models (such as AI and machine learning), new cloud-service offerings and advances in management software are fueling the industry. As more companies migrate HPC applications to the cloud, this blog series will feature those making the switch to share best practices and more. This interview spotlights Randy Herban, an engineer at Sylabs.
Q: Tell us about your solution. How does it benefit organizations in HPC?
Sylabs, and our product Singularity specifically, was born out of the national lab space. When Docker was first introduced, people found it interesting but weren't sure how to use it; the shift to containers was happening at the same time, and Docker lacked key security features. As a result, Singularity was built from the ground up for secure workflows in high performance computing (HPC), whether that means artificial intelligence, machine learning, EDA, fluid dynamics, or financial services. In general, we went broad: you name it, and that's what we're here for. Being able to take the containers we build and guarantee they are reproducible throughout the lifecycle of a run, whether that run lasts six weeks or six months, is crucial. Another huge benefit to Singularity users is the peace of mind that comes from running rootless: the user inside the container is the same as the user outside. Additionally, because we run as the user, the container has direct access to the system, giving the same performance as if you were on bare metal. To build on this, we recently introduced Singularity Enterprise as a customer-hosted offering, making it faster and easier for businesses to adopt containerization across their production environments.
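The rootless behavior described above can be checked directly from the command line. A minimal sketch, in which `my_app.sif` is a hypothetical image name (guarded so it only echoes the commands when Singularity or the image is unavailable):

```shell
# Sketch: confirming that the user identity inside a Singularity container
# matches the invoking user (i.e., the container does not run as root).
# "my_app.sif" is a hypothetical image name, not one from the article.
demo_done=0
if command -v singularity >/dev/null 2>&1 && [ -f my_app.sif ]; then
  echo "outside: $(whoami)"
  singularity exec my_app.sif whoami   # should print the same user, not root
else
  echo "illustration only: singularity exec my_app.sif whoami"
fi
demo_done=1
```

Because the process inside the container is the invoking user's own process, it inherits the user's permissions and talks to the host kernel directly, which is where the bare-metal performance claim comes from.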
Q: How can customers expect to be more competitive with HPC when using your solution?
One of Singularity’s key benefits is its security model. For customers who value their intellectual property, this is a clear win. The ability to cryptographically sign containers is a must, and we are currently developing a way to encrypt the entire container. Both of these features help keep prying eyes away from your IP and other data. The on-premises Container Library is critical for auditability: you can produce the exact container used for a workload and know that the application stack, from the core OS to every specific library, is exactly the same as when you started. Also, because HPC applications tend to be particularly hard to compile and finicky about their library stacks, we help users reduce their time-to-test and time-to-science.
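The signing workflow mentioned above maps onto Singularity's `sign` and `verify` subcommands. A minimal sketch, assuming a PGP keypair has already been created (for example with `singularity key newpair`) and using a hypothetical image name:

```shell
# Sketch: signing a container image and verifying the signature before use.
# Assumes an existing local keypair; "my_app.sif" is a hypothetical image.
if command -v singularity >/dev/null 2>&1 && [ -f my_app.sif ]; then
  singularity sign my_app.sif     # attach a cryptographic signature to the image
  singularity verify my_app.sif   # confirm the image is unmodified and who signed it
else
  echo "illustration only: singularity sign my_app.sif && singularity verify my_app.sif"
fi
sign_demo=1
```

Verifying before each run is what turns the signature into an audit guarantee: if the image has been tampered with anywhere in the pipeline, verification fails.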
Q: What cloud(s) and architectures do you support?
We’re entirely cloud-agnostic, and once your application is containerized, we make it really easy to pull that container into a different cloud environment and start running it. Gone are the days of trying to ensure that a cloud environment matches your on-prem cluster, because everything is already built into the container exactly how you want it. This also gives users a seamless way to migrate between and test various cloud offerings: they can pick and choose, with the confidence that the container is vetted and tested to run in a different environment.
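In practice, moving a workload to a new cloud environment looks like a single pull and run. A hedged sketch, where `library://myorg/default/my_app` is a placeholder library path rather than a real published image:

```shell
# Sketch: pulling the same container onto any cloud VM and running it
# unchanged. The library path below is a placeholder, not a real image.
if command -v singularity >/dev/null 2>&1; then
  singularity pull my_app.sif library://myorg/default/my_app || true
  [ -f my_app.sif ] && singularity run my_app.sif
else
  echo "illustration only: singularity pull my_app.sif library://myorg/default/my_app"
fi
pull_demo=1
```

Because the image is a single file, the same two commands work identically on any provider's VM, which is what removes the need to make the cloud environment mirror the on-prem cluster.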
As for architectures, we have users running Singularity on a variety of hardware, anything from ARM to POWER chips. Singularity’s Remote Builder also helps users build for these architectures. For example, if I’m working on my laptop and don’t have access to an ARM chip, I can issue a remote build and the container will be built on the target architecture.
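The remote build described above is triggered with the `--remote` flag. A minimal sketch, assuming `my_app.def` is a hypothetical definition file and that `singularity remote login` has been run once to authenticate against the build service:

```shell
# Sketch: submitting a build to the Remote Builder from a machine that
# lacks the target hardware. "my_app.def" is a hypothetical definition file.
if command -v singularity >/dev/null 2>&1 && [ -f my_app.def ]; then
  singularity build --remote my_app.sif my_app.def
else
  echo "illustration only: singularity build --remote my_app.sif my_app.def"
fi
remote_demo=1
```

The resulting `.sif` file is downloaded back to the local machine when the remote build finishes, so the laptop never needs the target architecture itself.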
Q: Tell us a cool story about a user/organization that did something amazing with your product.
To some degree, we see this every day in our interactions with users. Singularity fits so many use cases, and we frequently talk with users who say: “You are exactly what we need.” One interesting use case I heard recently involved a facility that had a critical service being run on a machine under someone’s desk, long after that person had left the organization. As luck would have it, the machine finally died one day and took down some critical applications with it. The facility was able to containerize the disk image and get the service running again in relatively short order.
Q: What advice would you give to someone migrating to the cloud?
I would recommend considering application portability. A key benefit of containers is that they let you bring everything along, test it, and know it is already vetted, which helps eliminate uncertainty in a new environment. This works great for a machine learning workflow, where you can train the model wherever is appropriate, say, on the latest GPU offerings from a cloud provider, and then take the same container to your edge device.
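The train-in-cloud, run-at-edge workflow above can be sketched with two invocations of the same image. Here `train.sif` and the script paths inside it are hypothetical; `--nv` is Singularity's flag for binding the host's NVIDIA driver libraries into the container:

```shell
# Sketch: same container image on a cloud GPU node and on an edge device.
# "train.sif" and the /opt/*.py scripts are hypothetical examples.
if command -v singularity >/dev/null 2>&1 && [ -f train.sif ]; then
  singularity exec --nv train.sif python /opt/train.py   # cloud node with GPUs
  singularity exec train.sif python /opt/infer.py        # edge device, identical image
else
  echo "illustration only: singularity exec --nv train.sif python /opt/train.py"
fi
ml_demo=1
```

Because both environments run the exact same image, any difference in behavior points at the hardware or data rather than at a mismatched software stack.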
Q: What best practices or lessons learned would you like to share?
Creating validity and confidence in your workflows is very important. Although these seem like small things, they really add up, and it makes a difference to align your testing and encryption practices when migrating to the cloud. Singularity can help with this, and we’ve written an easy-to-follow blog post about it.
Q: What shifts and changes do you see in HPC over the next five years?
In five years, I think the conversations around testing in the cloud will continue, as it is a hot topic for today’s users. I also hope to see machine learning used to minimize the amount of testing required.
About the Author
Randy Herban is an Engineer at Sylabs. Prior to joining the team at Sylabs, he worked at Microsoft as a Support Engineer and Cycle Computing as an Operations Engineer. Randy has a Bachelor of Science in Computer Science from Indiana University and resides in Mishawaka, Indiana. You can find him on LinkedIn here.