Containerization is motivated by many factors: modern applications are often developed to be “container first”, microservices are increasingly being deployed, and hybrid cloud deployments demand portable workloads.
As containers see wider adoption at the edge, programmatically managing containers and container clusters will only become more important. SC//HyperCore™ can be used to run containers for your DevOps teams.
SC//HyperCore improves how users run containerized workloads by automating installation of the operating system, container runtime, and workload containers.
Cloud-init customization via REST-APIs enables infrastructure-as-code, so developers and administrators can automate otherwise highly manual processes.
All Kubernetes deployments (and any container deployments) make use of our redundant storage and compute architecture to ensure application uptime.
Container deployments often involve many containers working in conjunction to support modern applications, such as in the case of microservices. As a result, many containers need to be instantiated and configured to fit their specific purposes. Loosely-coupled architectures require rapid, easy, and consistent container deployment. It simply wouldn’t be effective to manually create and configure containers due to time constraints and the risk of human error. SC//HyperCore’s cloud-init customization with REST-APIs radically improves a DevOps team’s ability to programmatically deploy consistent containers across sites while reducing required manual intervention.
Facilitate complete infrastructure-as-code container deployment
Save time automating manual steps in site and application setup
Reduce human error from manual setup
Ensure deployment consistency across sites
Enable consistent change control and more reliable updates through standardization
Implementing high availability with containers is one of the biggest challenges inherent to any container deployment. It is usually solved through a tool like Kubernetes (k8s), where the user implements redundancy at the container-cluster level: automating the creation of new k8s pods to replace those that have failed (possibly due to a k8s node loss), thus bringing those services back online. This is all well and good, but k8s extensions are needed to provide these k8s clusters and nodes with persistent storage for stateful applications.
If k8s isn’t required outside providing application-level redundancy, forgoing the Kubernetes cluster in favor of running containers within VMs on SC//HyperCore radically simplifies providing redundancy. After all, a VM with one (or several) containers will be highly available, just like every other VM running on SC//HyperCore. In this scenario, both compute and data storage redundancy are provided at the infrastructure level through our existing, easy-to-use technology.
Recent investments in REST-APIs radically improve the speed and ease of mass-container deployment on SC//HyperCore. As such, organizations can save management overhead by writing scripts to handle daily management tasks needed at scale, particularly when they are deploying containers. Perhaps the single most important REST-API endpoint for deploying containers on SC//HyperCore is CloudInitData.
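As a sketch of what a programmatic deployment might look like, the snippet below builds a VM-creation request body that carries cloud-init data. The field names, sizes, and the assumption that cloud-init payloads are base64-encoded are illustrative only, not the documented SC//HyperCore schema; consult the REST-API reference for the actual endpoint and fields.

```python
import base64
import json

# Hypothetical sketch: field names and VM options below are assumptions
# for illustration, not the documented SC//HyperCore request schema.
user_data = """#cloud-config
package_update: true
runcmd:
  - echo "first boot complete" > /var/log/first-boot.log
"""

payload = {
    "name": "container-host-01",
    "mem": 4 * 1024**3,   # 4 GiB of RAM
    "numVCPU": 2,
    "cloudInitData": {
        # Cloud-init payloads are commonly transmitted base64-encoded
        "userData": base64.b64encode(user_data.encode()).decode(),
        "metaData": base64.b64encode(b"local-hostname: container-host-01").decode(),
    },
}

body = json.dumps(payload)
print(body)
# An HTTP client (e.g. urllib.request or curl) would POST `body` to the
# cluster's VM-creation endpoint with appropriate authentication.
```

Because the request body is plain JSON, the same script can be looped over a list of sites or VM names to stamp out identical container hosts across an entire fleet.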
Cloud-init is a powerful open-source technology that allows users to automatically configure VMs on their first boot. Specifically, cloud-init is a package available for most Linux distributions that allows users to take a cloud image (essentially a vanilla Linux image equipped with cloud-init) and provide the VM with both meta-data and user-data during its initial boot. The configurations available are quite powerful. Users can set up SSH keys, create users, write files, and even run arbitrary commands automatically.
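A minimal cloud-config document exercising those capabilities might look like the following sketch (the user name, key, and paths are placeholders):

```yaml
#cloud-config
users:
  - name: devops
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... devops@example.com   # placeholder key
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
write_files:
  - path: /etc/motd
    content: |
      Provisioned by cloud-init on first boot.
runcmd:
  - echo "cloud-init configuration complete"
```

Cloud-init consumes this document once, on the VM's first boot, so the same image can be reused for every host while the per-VM user-data carries the differences.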
When choosing which runtime to utilize, customers often turn to Docker because it is seen as the de-facto standard and offers a large variety of features and support levels. Fortunately, running Docker containers on SC//HyperCore is quick and easy. For those modest container deployments that involve few containers and don’t require cluster orchestration, running Docker within VMs on SC//HyperCore provides high availability without setting up application-level redundancy. Docker containers will benefit from the self-healing attributes of any VM running on SC//HyperCore. The lifecycle of both these VMs and Docker containers can be programmatically managed via our REST-APIs. Going further, cloud-init can even run Docker commands immediately after VM boot to automatically create containers without any intervention.
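As an illustration of that last point, a cloud-config along these lines (the package name and container image are examples, not prescriptions) could install Docker and start a container on first boot with no manual steps:

```yaml
#cloud-config
package_update: true
packages:
  - docker.io          # Debian/Ubuntu package name; varies by distribution
runcmd:
  - systemctl enable --now docker
  - docker run -d --restart unless-stopped --name web -p 80:80 nginx
```

The `--restart unless-stopped` policy lets the Docker daemon restart the container after a reboot, complementing the VM-level availability that SC//HyperCore already provides.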
Docker can be deployed within a VM to spin up the container(s)
Support apps that need to run in a container
Only requires a few containers
Management overhead isn’t a burden
Full infrastructure-as-code is possible
A single script per VM configures the container workload as needed
Integrations are available with a wide range of optional Container Management Systems:
IBM Edge Application Manager
Google Anthos
Azure ARC for Kubernetes
SUSE Rancher Kubernetes Cluster Management Platform
Red Hat OpenShift Cloud
AWS IoT Greengrass
Avassa Control Tower
Portainer Container Management
Balena Cloud
We developed this Edge Computing Self-Assessment tool to help you think through the unique needs of your organization. While no assessment provides an exact formula, the personalized data our new tool offers can help you identify and explore your needs and preferences.
We are proud to have more than a thousand positive customer reviews on the most trusted third-party industry review websites. Read what our customers have to say on Gartner Peer Insights, G2, and TrustRadius.