OmegaSync
The Computing Hardware Research Lab (CHRL) worked with the DAC to develop a pipeline connecting a web app to HPC resources for solving computationally hard combinatorial optimization problems, such as computing the MaxCut of complex graphs with OmegaSync. The DAC created an R Shiny app running on Kubernetes that collects user information and graph files. The app formats the data, saves it to the HPC filesystem, and automates job submission (sketched below); it also triggers email notifications to users when a job starts and when it completes, delivering their results. This project highlights the DAC’s role in supporting faculty with complex research workflows.
PI: Nikhil Shukla, PhD (Department of Electrical and Computer Engineering)
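The job-handling layer of such a pipeline can be approximated with a small helper that the web app calls after a user uploads a graph. The sketch below assumes a Slurm scheduler on the HPC side; the staging path, batch script name, and mail settings are hypothetical placeholders rather than the DAC’s actual implementation (Slurm’s own --mail-type options are another common way to send the start/completion notices).

# Minimal sketch of the staging/submission/notification step, assuming a
# Slurm-based HPC cluster. Paths, the batch script, and addresses are
# hypothetical placeholders, not the DAC's production code.
import shutil
import smtplib
import subprocess
from email.message import EmailMessage

def submit_graph_job(graph_file: str, user_email: str) -> str:
    # Stage the user's graph file onto the shared HPC filesystem (placeholder path).
    staged = shutil.copy(graph_file, "/project/omegasync/input/")

    # Submit the solver batch script; sbatch prints "Submitted batch job <id>".
    result = subprocess.run(
        ["sbatch", f"--export=GRAPH={staged}", "run_maxcut.slurm"],
        capture_output=True, text=True, check=True,
    )
    job_id = result.stdout.strip().split()[-1]

    # Notify the user that the job is queued (SMTP host is a placeholder).
    msg = EmailMessage()
    msg["Subject"] = f"OmegaSync job {job_id} submitted"
    msg["From"] = "noreply@example.edu"
    msg["To"] = user_email
    msg.set_content(f"Your MaxCut job {job_id} is queued; results will follow by email.")
    with smtplib.SMTP("smtp.example.edu") as smtp:
        smtp.send_message(msg)
    return job_id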
Computing Environments at UVA
UVA Research Computing (UVA-RC) serves as the principal center for computational resources and associated expertise at the University of Virginia (UVA). Each year UVA-RC provides services to over 433 active PIs who sponsor more than 2463 unique users from 14 different schools and organizations at the University, maintaining a breadth of systems to support the computational and data-intensive research of UVA’s researchers.
High Performance Computing: Standard Security Zone
UVA-RC’s High Performance Computing (HPC) systems are designed with high-speed networks, high-performance storage, GPUs, and large amounts of memory in order to support modern compute- and memory-intensive programs. UVA-RC operates two HPC systems within the standard security zone, Rivanna and Afton.
Microservice Deployments
Kubernetes is a container orchestrator for both short-running jobs (such as workflow/pipeline stages) and long-running services (such as web and database servers). Containerized applications running in the UVA-RC Kubernetes cluster are visible to UVA research networks (and therefore from Rivanna, Afton, Skyline, etc.), and web applications can be made visible to the UVA campus or the public Internet.
Kubernetes
Research Computing runs microservices in a Kubernetes cluster that automates the deployment of many containers, making their management easy and scalable. This cluster will eventually consist of several dozen instances, with >2000 cores and >2TB of memory allocated to running containerized services. It will also have over 300TB of cluster storage and can attach to both project and
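For a long-running service, a deployment onto a cluster like this reduces to creating a Kubernetes Deployment object that names the container image and how many replicas to keep running. The sketch below uses the official Kubernetes Python client; the namespace, names, and image are hypothetical placeholders, not UVA-RC’s configuration.

# Minimal sketch: creating a two-replica web service with the official
# Kubernetes Python client ("pip install kubernetes"). The namespace,
# labels, and container image are hypothetical placeholders.
from kubernetes import client, config

def deploy_example_service() -> None:
    config.load_kube_config()  # read cluster credentials from ~/.kube/config
    container = client.V1Container(
        name="web",
        image="nginx:1.25",  # placeholder image for the containerized app
        ports=[client.V1ContainerPort(container_port=80)],
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="example-web"),
        spec=client.V1DeploymentSpec(
            replicas=2,  # Kubernetes keeps two copies running, replacing failed pods
            selector=client.V1LabelSelector(match_labels={"app": "example-web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "example-web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="demo", body=deployment)

if __name__ == "__main__":
    deploy_example_service()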
Container Services
Container-based architecture, also known as “microservices,” is an approach to designing and running applications as a distributed set of components or layers. Such applications typically run within containers, made popular in recent years by Docker. Containers are portable, efficient, and reusable, and they package code together with its dependencies in a single unit. A containerized service typically runs a single process, rather than an entire stack within the same environment, which lets developers replace, scale, or troubleshoot one portion of an application at a time.
General Availability (GA) of Kubernetes
Research Computing now manages microservice orchestration with Kubernetes, the open-source orchestration tool originally developed by Google.
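As a concrete, hypothetical illustration of the single-process pattern, the snippet below is a complete service: one Python process answering HTTP requests. Packaged with its dependencies into a container image, it would run alongside, but separately from, containers holding a database or reverse proxy.

# Minimal sketch of a single-process containerized service (hypothetical example).
# In a microservice design, a container runs only this one process; a database,
# cache, or proxy would each live in its own container.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to any GET, e.g. a Kubernetes liveness probe hitting /healthz.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    # Listen on all interfaces so the container runtime can route traffic to the pod.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()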