/tag/rivanna

  • Loading Module in Jupyter

    Users cannot load modules inside a JupyterLab session. If you need access to modules, please request a desktop session instead of JupyterLab. Fill out the form as you normally would for JupyterLab. After you get to a desktop, open a terminal (next to Firefox in the top bar) and type these commands:
    module load jupyterlab
    module load …   # your modules here
    jupyter-lab
    This should start Firefox shortly. If you accidentally close the window, right-click on the link in the terminal and choose “open link” to restart.
    An example of using LaTeX inside a JupyterLab session is shown in the screenshot below.

  • High-Security Standard Storage Maintenance: Oct 15, 2024

    The Ivy Virtual Machines (VMs) and the high-security zone HPC system will be down for storage maintenance on Tuesday, Oct 15, 2024, beginning at 6 a.m. The system is expected to return to full service by 6 a.m. on Wednesday, Oct 16.
    IMPORTANT MAINTENANCE NOTES
    During the maintenance, all VMs will be down, as will the UVA Ivy Data Transfer Node (DTN) and Globus services. The High-Security HPC cluster will also be unavailable for all job scheduling and viewing.
    If you have any questions about the upcoming Ivy system maintenance, you may contact our user services team.
    Ivy Central Storage transition to HSZ Research Standard
    To transition from old storage hardware, we have retired the Ivy Central Storage and replaced it with the new High Security Zone Research Standard storage.

  • HPC Maintenance: Oct 15, 2024

    The HPC cluster will be down for maintenance on Tuesday, Oct 15, 2024 beginning at 6 am. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until the cluster is returned to service.
    All systems are expected to return to service by Wednesday, Oct 16 at 6 am.
    IMPORTANT MAINTENANCE NOTES
    Expansion of /home
    To transition away from the Qumulo filesystem, we will migrate all /home directories to the GPFS filesystem and automatically increase each user’s /home directory limit to 200GB.

  • Thermal properties of materials from first-principles

    Prof. Esfarjani’s group is using the HPC cluster to develop the Anharmonic LAttice DYNamics (ALADYN) software suite to calculate thermal transport properties and phase transitions from first-principles. The codes can extract force constants, solve the Boltzmann transport equation, predict thermal equilibrium based on the self-consistent phonon theory, and run molecular dynamics simulations within an anharmonic force field. The figure shows the phonon density of states and dispersion curve of Ge obtained from ALADYN.
    PI: Keivan Esfarjani, PhD (Department of Materials Science & Engineering)

  • Research Computing Open House 2024

    UPDATE: The Research Computing Open House was held on a blustery, rainy day, but the spirits of the staff and attendees were not dampened. Turnout was above expectations despite the wet weather. Attendees enjoyed the buffet and their interactions with RC staff.
    The winners of the random-drawing prizes were
    Maria Luana Morais, SOM
    Matt Panzer, SEAS
    Artun Duransoy, SEAS
    Please join us at the Research Computing Open House on Tuesday, September 17, 2024, from 2-5 p.m. in the Commonwealth Room at Newcomb Hall. We are excited to host the UVA community to share updates on a new supercomputer and services that we are offering.

  • Slurm Script Generator


  • Reinstatement of file purging of personal /scratch files on Afton and Rivanna

    On Sep 1, 2024 RC system engineers will reinstate a file purging policy for personal /scratch folders on the Afton and Rivanna high-performance computing (HPC) systems. From Sep 1 forward, scratch files that have not been accessed for over 90 days will be permanently deleted on a daily rolling basis. This is not a new policy; it is a reactivation of an established policy that follows general HPC best practices.
    The /scratch filesystem is intended as a temporary work directory. It is not backed up and old files must be removed periodically to maintain a stable HPC environment.
    Key Points: Purging of personal scratch files will start on Sep 1, 2024.
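    To see which of your files would fall under the 90-day rule before the purge runs, a quick check along these lines can help (a minimal sketch; adjust the path to your own scratch directory):
    # list files under your personal scratch directory not accessed in the last 90 days
    find /scratch/$USER -type f -atime +90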

  • Rivanna Maintenance: May 28, 2024

    Rivanna will be down for maintenance on Tuesday, May 28, 2024 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service. While drive mapping and project storage will be unavailable, other storage will remain accessible through Globus.
    All systems are expected to return to service by Thursday, May 30 at 6 a.m.
    IMPORTANT MAINTENANCE NOTES
    Hardware and partition changes
    afton: We are pleased to announce the addition of 300 nodes, 96 cores each, based on the AMD EPYC 9454 architecture.

  • Rivanna Maintenance Schedule for 2024

    Rivanna will be taken down for maintenance in 2024 on the following days:
    Tuesday, February 6
    Tuesday & Wednesday, May 28 & 29
    Tuesday, July 2
    Tuesday, October 15
    Thursday, December 12
    Please plan accordingly. Questions about the 2024 maintenance schedule should be directed to our user services team.

  • Clear OOD Files

    To clear OOD Session files, the HPC system will need to be accessed via a terminal. See documentation for information on how to access via SSH.
    You can find the session files and logs for all Open OnDemand apps at:
    ~/ondemand/data/sys/dashboard/batch_connect/sys
    Under this directory you will see subdirectories for the Open OnDemand applications that you have used before. Under each subdirectory you can find the files that are created when you launch a new session.
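    For example, to see which apps have accumulated session files before deleting anything, a quick check like this works (a minimal sketch):
    # show the per-app session directories and their sizes
    du -sh ~/ondemand/data/sys/dashboard/batch_connect/sys/*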
    To quickly clear all session files for OnDemand from your /home directory, run:
    rm -rf ondemand
    Other directories related to Open OnDemand such as .

  • Rivanna Maintenance: February 6, 2024

    Rivanna, Research Project storage, and Research Standard storage will be down for maintenance on Tuesday, February 6 beginning at 6 a.m. You may continue to submit jobs to Rivanna until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service. All systems are expected to return to service by 6 a.m. on Wednesday, February 7.
    UVA’s Facilities Management (FM) group will be updating the data center power grid during the maintenance period. This work is expected to be completed by 6 a.m. on 2/7.

  • Using UVA’s High-Performance Computing Systems

    Afton is the University of Virginia’s newest High-Performance Computing system. The Afton supercomputer is composed of 300 compute nodes, each with 96 compute cores based on the AMD EPYC 9454 architecture, for a total of 28,800 cores. The increase in core count is augmented by a significant increase in memory per node compared to Rivanna. Each Afton node boasts a minimum of 750 Gigabytes of memory, with some supporting up to 1.5 Terabytes of RAM. The large amount of memory per node allows researchers to efficiently work with the ever-expanding datasets we are seeing across diverse research disciplines. The Afton and Rivanna systems provide access to 55 nodes with NVIDIA general-purpose GPU accelerators (RTX2080, RTX3090, A6000, V100, A40, and A100), including an NVIDIA BasePOD.
  • Apptainer and UVA HPC

    Introduction
    Apptainer is a continuation of the Singularity project (see here). On December 18, 2023 we migrated from Singularity to Apptainer.
    Containers created by Singularity and Apptainer are mutually compatible as of this writing, although divergence is to be expected.
    One advantage of Apptainer is that users can now build container images natively on the UVA HPC system.
    Apptainer and UVA HPC (after 12/18/2023)
    Apptainer is available as a module. The RC staff has also curated a library of pre-prepared Apptainer container images for popular applications as part of the shared software stack. Descriptions for these shared containers can be found via the module avail and module spider commands.

  • Compilers and UVA HPC

    UVA HPC offers multiple compiler bundles for C, C++, and Fortran. Different compilers have different strengths and weaknesses and different error messaging and debugging features, so users should be willing to try another one when appropriate. The modules system manages the compiler environment and ensures that only compatible libraries are available for loading.
    Many users of compiled languages are working with codes that can employ MPI for multinode parallel runs. MPI users should first understand how their chosen compiler works, then see the MPI instructions at our parallel programming page.
    Compiled languages can be more difficult to debug, and the assistance of a good debugger can be essential.

  • Intel and UVA HPC

    Description
    Intel C and C++ compilers
    Software Category: compiler
    For detailed information, visit the Intel website.
    Available Versions
    To find the available versions and learn how to load them, run:
    module spider intel
    The output of the command shows the available Intel module versions.
    For detailed information about a particular Intel module, including how to load the module, run the module spider command with the module’s full version label. For example:
    module spider intel/18.0
    Module Version   Module Load Command
    intel/18.0       module load intel/18.0
    intel/2023.1     module load intel/2023.1
    intel/2024.0     module load intel/2024.0
    The 2024.

  • Machine Learning and UVA HPC

    Overview
    Many machine learning packages can utilize general purpose graphics processing units (GPGPUs). If supported by the respective machine learning framework or application, code execution can be many times, often orders of magnitude, faster on GPU nodes than on nodes without GPU devices.
    The HPC system has several nodes that are equipped with GPU devices. These nodes are available in the GPU partition. Access to a GPU node and its GPU device(s) requires specific Slurm directives or command line options as described in the Jobs using a GPU Node section.
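    As a rough illustration of the Slurm options involved (a minimal sketch only; the GPU count, resource values, and container image are placeholders to be checked against the Jobs using a GPU Node documentation):
    #!/bin/bash
    #SBATCH -p gpu                # GPU partition
    #SBATCH --gres=gpu:1          # request one GPU device
    #SBATCH -c 4                  # CPU cores for the job
    #SBATCH -t 01:00:00           # time limit
    module purge
    module load apptainer
    # run a GPU-enabled framework inside a provided container (hypothetical image name)
    apptainer exec --nv tensorflow_latest.sif python my_model.py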
    Applications Several machine learning software packages are installed on the UVA HPC system.

  • Software Containers

    [Deprecated] On Dec 18, 2023 Singularity has been upgraded to Apptainer, a continuation of the Singularity project.
    Overview
    Singularity is a container application targeted to multi-user, high-performance computing systems. It interoperates well with Slurm and with the Lmod modules system. Singularity can be used to create and run its own containers, or it can import Docker containers.
    Creating Singularity Containers
    To create your own image from scratch, you must have root privileges on some computer running Linux (any version). Follow the instructions at the Singularity site. If you have only Mac or Windows, you can use the Vagrant environment. Vagrant is a pre-packaged system that runs under several virtual-machine environments, including the free VirtualBox environment.

  • TensorFlow and UVA HPC

    Overview
    TensorFlow is an open source software library for high performance numerical computation. It has become a very popular tool for machine learning and in particular for the creation of deep neural networks. The latest TensorFlow versions are now provided as prebuilt Apptainer containers on the HPC system. The basic concept of running Apptainer containers on the HPC system is described here.
    TensorFlow code is provided in two flavors, with or without support for general purpose graphics processing units (GPUs). All TensorFlow container images provided on the HPC system require access to a GPU node. Access to GPU nodes is detailed in the sections below.

  • RC's Data Analytics Center (DAC): Now Serving UVA's Research Community

    The Data Analytics Center is UVA’s new hub for the management and analysis of your large research data. Need help with your computational research? DAC staff specialize in key domain areas such as image processing, text analysis, bioinformatics, computational chemistry and physics, neural networks, and more. And because the DAC team is located within Research Computing, they can assist in getting your workflows running on the University’s high-performance cluster or secure data system. They can answer your basic computational questions or, through funded engagements, be embedded in your projects.
    Big data doesn’t have to be a big deal. Learn how DAC can assist with your computational research – schedule an initial consultation with one of their data analysts by submitting a consultation request.

  • Rivanna Maintenance: December 18, 2023

    Rivanna will be down for maintenance on Monday, December 18, 2023 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.
    All systems are expected to return to service by 6 a.m. on Tuesday, December 19.
    IMPORTANT MAINTENANCE NOTES
    The operating system will be upgraded to Rocky 8.7 with system glibc 2.28 and GCC 8.5.0. Due to fundamental changes in system libraries, the entire software stack is rebuilt. Users should rebuild all self-compiled codes and R packages.

  • Rivanna Maintenance: October 3, 2023

    Rivanna will be down for maintenance on Tuesday, October 3, 2023 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service. All systems are expected to return to service by 6 a.m. on Wednesday, October 4.
    IMPORTANT MAINTENANCE NOTES
    New largemem nodes
    RC engineers will be adding 36 nodes, each with 40 cores and 750 GB total memory, to the largemem partition on Rivanna. Jobs that need more than 9 GB of memory per core should be submitted to the largemem partition rather than the standard partition.

  • Instructional Use of High Performance Computing

    Instructors can request instructional allocations on Rivanna and Afton for classes and extended workshops. These allocations are time-limited and generally allow access to a restricted set of nodes and only one special Slurm partition, but are otherwise equivalent to any allocation.
    Resource Availability
    Hardware and Partition
    Instructional allocations may use the interactive partition. The instructional allocation is 100,000 SUs for the semester during which the course is conducted. For workshops, the allocation will persist during the workshop and for two days afterwards. RC offers several low-cost storage options to researchers, including 10TB of Research Standard storage for each eligible PI at no charge.

  • Allocations

    Time on Rivanna/Afton is allocated as Service Units (SUs). One SU corresponds to one core-hour. Multiple SUs make up what is called an allocation (e.
  • Virginia Women in HPC Events in September & October

    VA-WHPC September Event - Leadership Journeys Time: Sep 19, 2023, 01:00 PM EST (US and Canada).
    Join us for our next community event featuring Dr. Neena Imam as she shares her personal view of challenges and successes experienced throughout her inspiring leadership journey in research, HPC and AI computing. Come learn about career strategies, ask questions, and contribute to our discussion of how the playing field may be leveled to offer equitable IT & HPC leadership opportunities for women and minorities.
    Dr. Imam earned a PhD in Electrical Engineering and has been engaged in research and computing in a variety of roles.

  • Rivanna Maintenance Schedule for 2023

    Rivanna will be taken down for maintenance in 2023 on the following days:
    Tuesday, March 7
    Tuesday, May 30
    Tuesday, October 3
    Monday, December 18
    Please plan accordingly. Questions about the 2023 maintenance schedule should be directed to our user services team.

  • New Scratch System on Rivanna: July 18, 2023

    During the July 18th maintenance, RC engineers installed a new /scratch file storage system on Rivanna. We have created sample scripts and instructions to help you transfer your files from the previous file system to the new one. (Expand the link below for details.) The previous scratch filesystem, now called /oldscratch, will be permanently retired on October 17, 2023, and all the data it contains will be deleted.
    Users should clean up their /oldscratch directory in preparation, to minimize the load. A sample script is posted below.
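    Purely as a hypothetical illustration (not the posted script), a transfer from the old filesystem could look like:
    # copy a project folder from the retired scratch area to the new one,
    # preserving timestamps and permissions (illustrative paths)
    rsync -av /oldscratch/$USER/my_project/ /scratch/$USER/my_project/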
    Modified queue limits have been implemented to provide maximum read/write performance of the new /scratch filesystem.

  • Rivanna Maintenance: July 18, 2023

    Rivanna will be down for maintenance on July 18, 2023 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.
    All systems are expected to return to service by 6 a.m. on Wednesday, July 19.
    IMPORTANT MAINTENANCE NOTES
    New scratch
    RC engineers will be installing a new /scratch storage filesystem that can be accessed at /scratch/$USER after the end of maintenance.
    Modified queue limits will be implemented to provide maximum read/write performance of the new /scratch filesystem.

  • NVIDIA DGX BasePOD™

    Introducing the NVIDIA DGX BasePOD™
    As artificial intelligence (AI) and machine learning (ML) continue to change how academic research is conducted, the NVIDIA DGX BasePOD, or BasePOD, brings new AI and ML functionality to UVA’s High-Performance Computing (HPC) system. The BasePOD is a cluster of high-performance GPUs that allows large deep-learning models to be created and utilized at UVA.
    The NVIDIA DGX BasePOD™ on Rivanna and Afton, hereafter referred to as the POD, is comprised of:
    10 DGX A100 nodes with 2 TB of RAM per node
    80 GB of GPU memory per GPU device
    Compared to the regular GPU nodes, the POD contains advanced features such as:

  • Software Containers

    Overview
    Containers bundle an application, the libraries and other executables it may need, and even the data used with the application into portable, self-contained files called images. Containers simplify installation and management of software with complex dependencies and can also be used to package workflows.
    Please refer to the following pages for further information.
    Singularity (before Dec 18, 2023)
    Apptainer (after Dec 18, 2023)
    Short course: Software Containers for HPC
    Container Registries for UVA Research Computing
    Images built by Research Computing are hosted on Docker Hub (and previously Singularity Library).
    Singularity Library
    Due to storage limits we can no longer add Singularity images to Singularity Library.

  • Rivanna Maintenance: May 30, 2023

    Rivanna will be down for maintenance on May 30, 2023 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.
    All systems are expected to return to service by 6 a.m. on Wednesday, May 31.
    IMPORTANT MAINTENANCE NOTES
    Five RTX3090 nodes (4 GPU devices each) have been added to the gpu partition; use --gres=gpu:rtx3090 in your Slurm script.
    Modules
    The toolchains gompic, gcccuda, and goolfc will be removed from Rivanna during the maintenance period, since we now have CUDA-aware toolchains based on gcc/11.

  • GPU-enabled Software and UVA HPC

    Please note that certain modules can only run on specific GPU types. This will be displayed in a message upon loading the module.
    Certain software applications may also be able to take advantage of the advanced capabilities provided by the NVIDIA DGX BasePOD™.
    Learn More
    Module      Category   Description
    alphafold   bio gpu    Open source code for AlphaFold
    amber       chem gpu   Amber (originally Assisted Model Building with Energy Refinement) is software for performing molecular dynamics and structure prediction.

  • Virginia Women in HPC - Student Lightning Talks

    What: Join us in welcoming 11 undergraduate and graduate students from across Virginia to talk about their research. The talks will be in a lightning-style format, allowing 3 minutes for each student to present, plus 1-2 questions from the audience. Don’t miss out on this fantastic opportunity to hear about a variety of research topics within HPC!
    Event Time: April 4, 2023, 01:00 PM EST (US and Canada).
    Register now   – Featured Speakers: Lakshmi Miller - Graduate Student
    Aerospace and Ocean Engineering, Virginia Tech
    “CFD Informed Maneuvering of AUVs”
    Rashmi Chawla - Graduate Student

  • Workshops

    UVA Research Computing provides training opportunities covering a variety of data analysis, basic programming and computational topics. All of the classes listed below are taught by experts and are freely available to UVa faculty, staff and students.
    New to High-Performance Computing? We offer orientation sessions to introduce you to the Afton & Rivanna HPC systems on Wednesdays (appointment required).
    – Wednesdays 3:00-4:00pm
    Sign up for an “Intro to HPC” session
    Upcoming Workshops
    DATE   WORKSHOP   INSTRUCTOR
    There are currently no training events scheduled. Please check back soon!
    Research Computing is partnering with the Research Library and the Health Sciences Library to deliver workshops covering a variety of research computing topics.

  • Globus Data Transfer

    Globus is a simple, reliable, and fast way to access and move your research data between systems. Globus allows you to transfer data to and from systems such as:
    Laptops & personal workstations
    Rivanna/Afton HPC clusters
    High-Security Research Standard Storage
    Lab / departmental storage
    Tape archives
    Cloud storage
    Off-campus resources (ACCESS, National Labs)
    Globus can help you share research data with colleagues and co-investigators, or move data back and forth between a lab workstation and Rivanna/Afton or your personal computer.
    Are your data stored at a different institution? At a supercomputing facility? All you need is your institution’s login credentials.

  • Rivanna Maintenance: December 19, 2022

    Rivanna will be down for maintenance on December 19, 2022 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service. Users will not be able to access the Globus data transfer node during the maintenance period.
    All systems are expected to return to service by 6 a.m. on Tuesday, December 20. Globus users may need to rebuild their shared collections.
    IMPORTANT MAINTENANCE NOTES
    Two new toolchains are now available: goolf/11.2.0_4.1.4 and intel/2022.

  • Scratch Recovery with Globus

    Globus is a simple, reliable, and fast way to access and move your research data between systems. Researchers can transfer data from their old scratch directories to new scratch via a single web interface - no software download or installation is necessary.
  • Virginia Women in HPC - Women in HPC & IT Leadership Roles

    Topic: Women in HPC & IT Leadership Roles.
    When: October 12, 2022 01:00 PM, Eastern Time (US and Canada).
    Join us for our Fall community meeting to hear from female leaders in the HPC & IT field sharing challenges and successes experienced throughout their careers. Don’t miss this fantastic opportunity to learn about career strategies, share your experience, and contribute to our discussion of how the playing field may be leveled to offer equitable HPC & IT leadership opportunities for women and minorities. Attendees are invited to share their own experiences and engage with panelists during this interactive Q&A session.

  • `ssh` on UVA HPC

    The secure shell ssh is the primary application used to access the HPC system from the command line.
    Connecting to a Remote Host
    For Windows, MobaXterm is our recommended ssh client; this package also provides an SFTP client and an X11 server in one bundle.
    Mac OSX and Linux users access the cluster from a terminal through OpenSSH, which is preinstalled on these operating systems. Open a terminal (on OSX, the Terminal application) and type
    ssh -Y mst3k@login.hpc.virginia.edu
    where mst3k should be replaced by your user ID. You will generally need to use this format unless you set up your user account on your Mac or Linux system with your UVA ID.
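    If you connect often, an entry in your SSH client configuration saves typing (a minimal sketch; the host alias and user ID are placeholders):
    # ~/.ssh/config
    Host uva-hpc
        HostName login.hpc.virginia.edu
        User mst3k
        ForwardX11 yes
    With this in place, ssh uva-hpc is equivalent to the longer command above.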

  • How to add packages to a container?

    Basic Steps
    Strictly speaking, you cannot add packages to an existing container since it is not editable. However, you can try to install missing packages locally. Using python-pip as an example:
    module load apptainer
    apptainer exec <container.sif> python -m pip install --user <package>
    Replace <container.sif> with the actual filename of the container and <package> with the package name. The Python package will be installed in your home directory under .local/lib/pythonX.Y, where X.Y is the Python version in the container.
    If the installation results in a binary, it will often be placed in .local/bin. Remember to add this to your PATH:
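    A minimal way to do that in bash, assuming the default ~/.local/bin location mentioned above:
    # make locally installed executables visible to your shell
    export PATH=$HOME/.local/bin:$PATH
    Adding the same line to your ~/.bashrc makes the change persist across sessions.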

  • KNL Nodes and Partition to be Removed from Rivanna on June 30, 2022

    Rivanna has included Intel KNL (Knight’s Landing) nodes for several years. This was a unique architecture, not well suited for general use, and the manufacturer has stopped producing or supporting this type of hardware. As a result, the KNL nodes will be removed from Rivanna on June 30, 2022.
    Rivanna System Details

  • Rivanna Storage

    There are a variety of options for storing large-scale research data at UVa. Public and Internal Use data storage systems can be accessed from the Rivanna and Afton high performance computing systems.
    Storage Directories
    Name: /home
    Quota: 200GB
    Price: Free
    Data Protection: 1-week snapshots
    Accessible from: Rivanna/Afton
    Best Practices: /home is best used as a working directory when using Rivanna/Afton interactively. Slurm jobs run against /home will be slower than those run against /scratch. The /home directory is a personal storage space that is not shareable with other users.

  • Research Standard Storage

    Overview
    The Research Standard Storage file system provides users with a solution for research data storage and sharing. Public, internal use, and sensitive research data can be stored in Research Standard storage, and UVA Information Security provides details about data sensitivity classifications. Members in the same group have access to a shared directory created by the team lead or PI. Group membership can be defined and managed through Grouper (requires VPN connection). Research Standard storage is mounted on the HPC cluster and can also be accessed on a personal computer with an SMB mount, allowing for point-and-click file manipulation.
    As of July 2024, each PI with a Research Computing account will have up to 10 TB of Research Standard Storage at no charge.

  • Rivanna Maintenance: May 17, 2022

    Rivanna will be down for maintenance on May 17, 2022 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service. Users will not be able to access the Globus data transfer node (UVA Main-DTN) or Research Project storage during the maintenance period. All systems are expected to return to service by 6 a.m. on Wednesday, May 18.
    IMPORTANT MAINTENANCE NOTES
    The operating system will be upgraded from CentOS 7.8 to 7.

  • Virginia Women in HPC - Panel Discussion, April 27

    Please join us for a lively panel discussion of the high-performance computing infrastructure and support resources available to researchers in the Commonwealth of Virginia. Panelists from Virginia’s top research universities will provide an overview of the user base at their home institutions and discuss strategies for helping researchers make better use of existing HPC resources. Attendees are encouraged to share their own experiences and engage with panelists during this interactive Q&A session.
    *Topic: High-performance Computing Resources in the Commonwealth of Virginia
    When: April 27, 2022 01:00 PM, Eastern Time (US and Canada)
    REGISTER NOW!   – Featured panelists: Matthew Brown (VT)

  • Cyberduck

    Cyberduck is a transfer tool for Windows and Mac. It supports a large number of transfer targets and protocols. Only SFTP can be used with Rivanna/Afton. The free version will pop up donation requests.
    Download Download Cyberduck
    Connecting to the HPC System and File Transfer
    Launch Cyberduck. After launching Cyberduck, the user interface will open. To initiate a connection to UVA HPC, click the Open Connection button.
    Enter Your Credentials. From the drop-down menu, select SFTP (SSH File Transfer Protocol). Then enter the appropriate information in the following fields:
    Host: login.hpc.virginia.edu
    Username: your computing ID
    Password: your UVA HPC password
    Port: 22
    When completed, click Connect.

  • Filezilla

    Filezilla is a cross-platform data transfer tool. The free version supports FTP, FTPS, and SFTP. Only SFTP can be used with UVA HPC.
    Download Download Filezilla
    Connecting to the HPC System and File Transfer
    Launch FileZilla. After launching FileZilla, the user interface will open. Your local file system and files are listed in the left-side panels. You will enter your login credentials in the fields highlighted in the figure below.
    Enter Your Credentials. Fill in the Host, Username, Password, and Port fields.
    Host: login.hpc.virginia.edu
    Username: your computing ID
    Password: your Eservices password
    Port: 22
    When completed, click Quickconnect.

  • Virginia Women in HPC - Research Highlights Event, Jan. 25

    We are proud to announce the founding of Virginia’s first Women in High-Performance Computing (VA-WHPC) program. Join us for our first event of 2022: Female research leaders of the Commonwealth sharing and discussing how HPC has facilitated their scientific research and professional careers.
    Topic: How does HPC help with your scientific research – Faculty perspectives, Part II
    When: Jan 25, 2022 01:00 PM, Eastern Time (US and Canada)
    REGISTER NOW!   – Our speakers: Anne Brown (VT) is an Assistant Professor of Biochemistry, Science Informatics Consultant and Health Analytics Coordinator at Virginia Tech. Her research interests include utilizing computational modeling to answer biological questions and aid in drug discovery and the application of computational molecular modeling to elucidate the relationship between structure, function, and dynamics of biomolecules.

  • Rivanna Maintenance: December 14, 2021

    Rivanna and the Globus data transfer nodes (DTNs) will be down for maintenance on Tuesday, December 14, 2021 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.
    Users will be unable to transfer data using Globus during the maintenance period. Rivanna and the Globus DTNs are expected to return to service by 6 a.m. on Wednesday, December 15.
    IMPORTANT MAINTENANCE NOTES
    New GPU
    We are pleased to announce the addition of DGX A100 GPUs to the gpu partition.

  • Maintenance Day: October 12, 2021

    The Globus data transfer nodes (DTNs) and Rivanna’s parallel nodes will be down for maintenance on Tuesday, October 12, 2021, between 8 a.m. and 4 p.m. Users will be unable to transfer data using Globus or run parallel jobs on Rivanna during this period.
    All other systems and services—including storage—are expected to continue operating normally.
    IMPORTANT MAINTENANCE NOTES
    The following Rivanna software changes will be implemented during the maintenance period:
    IDL/8.4 replaced by 8.8 (8.7.2 still available)
    ANSYS default version changing from 2021r1 to 2021r2

  • Breaking News: AlphaFold is now available on Rivanna!

    We are pleased to announce that AlphaFold is now available on Rivanna! Here are some of our users’ first protein structure prediction calculations on Rivanna.

    Simple ZFc protein: Similar results for AlphaFold (brown) and I-TASSER (blue). (This figure was created with the RCSB Pairwise Structure Alignment tool.)

    146PduD-linker-ZFc protein: AlphaFold’s (left) superior ability to predict secondary structure, a β-sheet in its green-yellow region, whereas I-TASSER (right) is not sufficiently refined to feature any β-strands. (This figure was created with NGL Viewer.)
    (Credits: David Bass and Prof. Keith Kozminski, Department of Biology)
    FAQ What is AlphaFold?
    AlphaFold is an AI for protein structure prediction developed by Google DeepMind.


  • Converting a Jupyter Notebook to a Python Script

    Sometimes it may be useful to convert a Jupyter notebook into a Python executable script. Once your notebook is opened in OOD you can select File > Export Notebook As … > Export Notebook to Executable Script:
    This will download a Python executable with a ‘.py’ extension into your local computer’s Downloads folder. Your notebook may also show “Download as” instead of “Export Notebook As …”. Either of these selections will allow you to download a Python executable.
    This script can be copied to the HPC system in the working directory where JupyterLab was accessing the notebook. Information on transferring files to and from Rivanna can be found here.
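    One common way to copy the exported script to the cluster is scp (a minimal sketch; the filename and destination directory are placeholders):
    # copy the exported script from your Downloads folder to a working directory on the HPC system
    scp ~/Downloads/my_notebook.py mst3k@login.hpc.virginia.edu:/scratch/mst3k/project/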

  • Remaining maintenance days in 2021

    Rivanna will be down for maintenance on Tuesday, October 12 and Tuesday, December 14. Please plan accordingly. We do not anticipate any additional maintenance days in 2021.
  • ITS `home1` directories unavailable on Rivanna

    ITS has completed phase 1 of its network reconfiguration and ITS NAS storage volumes have been remounted on Rivanna. RC managed Research Standard and Research Project storage are also fully available. ITS home1 directories, including /nv/t* mounts, remain unavailable on Rivanna. Users will still be able to mount ITS home1 storage to their local workstations and transfer their data via Globus and the UVA Main DTN. We regret the inconvenience.
  • Rivanna Maintenance: June 15, 2021

    Rivanna will be down for maintenance on Tuesday, June 15, 2021, beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.
    Rivanna is expected to return to service by 6 a.m. on Wednesday, June 16.
    IMPORTANT MAINTENANCE NOTES
    Globus
    Some Globus users may need to rebuild their shared connections after the maintenance period has ended. Users who require assistance with this task are invited to join us for office hours between 10 a.

  • Converting a Jupyter Notebook to a PDF

    Users cannot load modules inside the OpenOnDemand App for JupyterLab. Therefore it is not possible to convert a Jupyter Notebook to a PDF directly inside the JupyterLab Interactive App on OpenOnDemand.
    There are two ways to convert a Jupyter Notebook to a PDF:
    Directly from the command line: ssh from your terminal and type the following:
    module load anaconda/2020 texlive
    jupyter nbconvert --to pdf your_script.ipynb
    If you want to use a GUI, please request a desktop session.
    Fill out the form as you normally would for JupyterLab. After you get to a desktop, open a terminal (black box next to Firefox in the top bar) and type these commands:
    module load anaconda/2020 texlive
    jupyter notebook
    This will pull up JupyterLab.

  • Computing Systems

    UVA Research Computing can help you find the right system for your computational workloads. From supercomputers to HIPAA secure systems to cloud-based deployments with advanced infrastructure, various systems are available to researchers.
    Facilities Statement - Are you submitting a grant proposal and need standard information about UVA research computing environments? Get it here.
    High Performance Computing - Rivanna and Afton
    A traditional high performance cluster with a resource manager, a large file system, modules, and MPI processing. Get Started with UVA HPC
    Secure Computing for Highly Sensitive Data - Ivy
    A multi-platform, HIPAA-compliant system for secure data that includes dedicated virtual machines (Linux and Windows), JupyterLab Notebooks, and Apache Spark.

  • Launching RStudio Server from an Apptainer Container

    Rocker provides many software containers for R. Due to the default permission settings of our file system, launching an RStudio Server session is not straightforward. If you are interested in using their containers on the HPC system, please follow these steps.
    Pull container
    Use Apptainer to pull the container. We will use geospatial in this example.
    module load apptainer
    apptainer pull docker://rocker/geospatial
    You should see geospatial_latest.sif in your current directory.
    One-time setup
    The commands in this section are to be executed as a one-time setup on the frontend. You may need to repeat the steps here when running a new rocker container.

  • Migrating Python packages

    Scenario
    You have installed Python packages locally in one version and now wish to use them in a different version. For example, you have been using Python 3.6 but it is obsolete and will be removed soon, so you need to set up those packages for Python 3.8. There are several ways to accomplish this, depending on the package manager. In this how-to we will discuss pip and conda.
    You will need to load the module for the newer Python version. For this example,
    module load anaconda/2020.11-py3.8
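    What follows depends on the package manager; for pip, one hedged approach is to record the packages installed under the old version and reinstall them under the new one (the list file name is a placeholder):
    # under the old Python, record locally installed packages
    python -m pip freeze --user > my_packages.txt
    # after loading the newer Python module, reinstall them locally
    python -m pip install --user -r my_packages.txt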

  • How-To Guides for UVA HPC Users

    Guides
    Building compiled code
    Using make
    Building and running MPI Code
    Bioinformatics on UVA HPC
    Clear OOD Session Files
    Convert Jupyter Notebook to PDF
    Convert Jupyter Notebook to Python Script
    Custom Jupyter kernels
    Loading Modules in Jupyter
    Docker images on UVA HPC
    Adding packages to a container
    Migrate Python packages
    Launch RStudio Server from an Apptainer container
    More Documentation
    Connecting: Using SSH, Using a browser, Using FastX
    Jobs / Slurm / Queues: Slurm Overview, Queues
    Storage and File Transfer: Storage overview, Data transfer methods
    Allocations: Allocations Overview
  • Pricing

    Below is a schedule of prices for Research Computing resources.
    High Performance Computing Allocations
    Type            SU Limits   Cost    SU Expiration
    Standard        None        Free    12 months
    Purchased       None        $0.01   Never
    Instructional   100,000     Free    2 weeks after last training session
    A service unit (SU) resembles usage of a trackable hardware resource for a specified amount of time. In its simplest form 1 SU = 1 core hour, but the SU charge rate can vary based on the specific hardware used. Resources like GPUs and memory may incur additional SU charges.
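    For example, under the simple 1 SU = 1 core-hour rate, a job that runs on 4 cores for 10 hours consumes 40 SUs; a job that also requests GPUs or large amounts of memory may be charged more, according to the hardware-specific rates.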

  • Rivanna Maintenance: Phase 2 Completed

    Phase 2 of the maintenance period is complete. A majority of Rivanna’s nodes, including its parallel nodes, can now be accessed by users.
  • Rivanna Maintenance: Phase 1 Completed

    Phase 1 of the maintenance period is now complete. Rivanna’s standard, GPU, largemem, and front-end nodes have been returned to service. Pending jobs that were submitted before the maintenance began need to be resubmitted.
    Rivanna’s parallel nodes will be released in phase 2 which we anticipate being finished by 5 p.m. on Tuesday, December 22.

  • Rivanna Maintenance Update: 12/18/20, 02:30 pm EST

    In order to expedite users' access to the system, Rivanna will be returned to service in 2 separate phases:
    During phase 1, which is expected to be completed in fewer than 72 hours from now, the standard, GPU, largemem, and front-end nodes will be reactivated. Rivanna’s parallel nodes will be released in phase 2 which should be finished within the 72-hour timeframe.

  • Rivanna Maintenance Extended

    The maintenance period has been extended for another 72 hours minimum. We apologize for the inconvenience. Rivanna was expected to return to service on December 17, but ongoing electrical work in the data center and other obstacles have delayed our engineering team’s progress.
    We will send out an e-mail announcement as soon as the maintenance period ends. Updates will also be posted on our website.

  • Rivanna Maintenance: December 16-17, 2020

    You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.


  • Bioinformatics Resources and UVA HPC

    The UVA research community has access to numerous bioinformatics software packages, installed directly or available through the bioconda Python modules.
    Click here for a comprehensive list of currently-installed bioinformatics software.
    Popular Bioinformatics Software Below are some popular tools and useful links for their documentation and usage:
    Tool       Version   Description   Useful Links
    BEDTools   2.

  • Rivanna Maintenance: Sept 22, 2020

    Rivanna will be down for maintenance on Tuesday, September 22, beginning at 8:30 a.m. It is expected to return to service later in the day. RC engineers will be installing new hardware that is needed to stabilize the /scratch filesystem.
    You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.
    If you have any questions or concerns about the maintenance period, please contact our user support team at hpc-support@virginia.edu.

  • Automated Image Labeling and Iterative Learning - Live Seminar: September 15, 2020

    MathWorks engineers will offer a free live webinar on September 15th from 2:00 to 3:00 Eastern time.
  • Deep Learning for Neuroscience - Live Seminar: September 22, 2020

    MathWorks engineers will offer a free live webinar on September 22nd from 2:00 to 3:30 Eastern time.
  • ACCORD: Jupyter Lab

    Back to Overview
    Jupyter Lab allows for interactive, notebook-based analysis of data. It is a good choice for pulling quick results or refining your code in numerous languages, including Python, R, Julia, bash, and others.
    Learn more about Jupyter Lab

  • ACCORD: RStudio

    Back to Overview
    RStudio is the standard IDE for research using the R programming language.
    Learn more about RStudio

  • ACCORD: Theia IDE

    Back to Overview
    Theia Python is a rich IDE that allows researchers to manage their files and data, write code with an intelligent editor, and execute code within a terminal session.
    Learn more about the Theia Python IDE

  • R Updates: June 17, 2020

    During the June maintenance, we made changes to R which will affect how your R programs run on Rivanna. A brief description of the changes is as follows:
  • R Updates: June 17, 2020

    During the June maintenance, we will make changes to R which will affect how your R programs run on Rivanna. Below is a list of the changes and how they will affect your code.
    1. The gcc-built versions of R will be updated to goolf-built versions. Instead of loading gcc before loading R, you will need to load goolf or gcc openmpi. For example: module load goolf R/4.0.0.
    Remember to update any Slurm scripts that have module load gcc R or module load gcc R/3.x.x.
    2. The locations of the R libraries will be updated. We are changing the locations of the R libraries (i.

  • Transitioning to New R Modules: June 17, 2020

    The recommended steps for transitioning your R programs after the June maintenance are as follows: Determine which version of R you will be using (e.g., R/3.6.3). Open a terminal window on the HPC system and load the version of R that you chose in step #1 (e.g., module load goolf R/3.6.3). (Optional) Run our script to rebuild your existing R library for the newer version of R. For example, if you had been using R/3.5.1 and are switching to R/3.6.3, type the following in the terminal window: updateRlib 3.5.1 . Make sure that you have loaded any other modules (e.
  • Rivanna Maintenance: June 17, 2020

    You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.


  • RC Acquires New Accounting Management Software for Rivanna

    Research Computing will be activating a new accounting management package for Rivanna on June 17, 2020. The software was purchased from Adaptive Computing, which specializes in advanced management applications for high-performance systems. Rivanna users can expect to see more accurate reporting on their Service Unit (SU) balances and burn rates. Information on usage by individual members of an allocation group will also be available.
    Commands such as allocations will remain but will reflect the new accounting. Users should be aware that the new accounting system implements “liens” on running jobs, and that the SUs requested for each job will be held in a reserved pool until the job completes.

  • High Performance Computing

    Research Computing supports all UVA researchers who are interested in writing code to address their scientific inquiries. Whether these programming tasks are implemented interactively, in a series of scripts or as an open-source software package, services are available to provide guidance and enable collaborative development. RC has specific expertise in object-oriented programming in Matlab, R, and Python.
    Examples of service areas include:
    Collaborating on package development
    Reviewing and debugging code
    Preparing scripts to automate or expedite tasks
    Developing web interfaces for interactive data exploration
    Advising on integration of existing software tools
    UVA has three local computational facilities available to researchers: Rivanna, Afton, and Ivy.

  • UVA HPC Software

    Overview
    Research Computing at UVA offers a variety of standard software packages for all UVA HPC users. We also install requested software based on the needs of the high-performance computing (HPC) community as a whole. Software used by a single group should be installed by that group’s members, ideally on leased storage controlled by the group. Departments with a set of widely-used software packages may install them to the lsp_apps space. The Research Computing group also provides limited assistance for individual installations.
    For help installing research software on your PC, please contact Research Software Support at res-consult@virginia.edu.
    Software Modules and Containers
    Software on the HPC system is accessed via environment modules or containers.

  • Building Your Code on the HPC System

    Building your Application
    Creating an executable from source with a compiled language requires two steps, compiling and linking. The combination of these is generally called building. The output of the compiler is generally an object file, which on Unix will end in a .o suffix. Object files are machine code and are not human-readable, but they are not standalone and cannot be executed. The linker, which is usually invoked through the compiler, takes all object files, along with any external libraries, and creates the executable (also called a binary).
    Compilers are invoked on source files with a line such as
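    A generic sketch of such a line (compiler, file names, and flags are illustrative):
    # compile a source file into an object file, then link the objects into an executable
    gcc -c mycode.c
    gcc -o myprog mycode.o -lm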

  • Running a Bioinformatics Software Pipeline with Wdl/Cromwell

    WDL (pronounced widdle) is a workflow description language to define tasks and workflows. WDL aims to describe tasks with abstract commands that have inputs, and once defined, allows you to wire them together to form complex workflows.
    Learn More
    CROMWELL is the execution engine (written in Java) that supports running WDL scripts on three types of platforms: local machine (e.g. your laptop), a local cluster/compute farm accessed via a job scheduler (e.g. Slurm, GridEngine) or a cloud platform (e.g. Google Cloud or Amazon AWS).
    Learn More
    Introduction
    Pre-requisites: This tutorial assumes that you have an understanding of the basic structure of a WDL script.

  • SSH Keys

    Users can authenticate their SSH sessions using either a password or an ssh key. The instructions below describe how to create a key and use it for password-less authentication to your Linux instances.
    About SSH Keys
    SSH keys are a pair of encrypted files that are meant to go together. One half of the pair is called the “private” key, and the other half is the “public” key. When a user uses the private key to connect to a server that is configured with the public key, the match can be verified and the user is signed in. Or, to put it more simply, when data is encrypted using one half of the key pair, it can be decrypted using the other half.
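    As a minimal sketch of the usual OpenSSH workflow (the key type and target host are illustrative):
    # generate a key pair on your local machine
    ssh-keygen -t ed25519
    # copy the public half into the remote instance's authorized_keys
    ssh-copy-id mst3k@your-linux-instance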

  • Rivanna Software Updates: March 11, 2020

    The Rivanna maintenance has been completed on March 11 and the system is back in service. The following software modules have been removed from Rivanna during the maintenance period. Please use the suggested newer versions:
    gcc/5.4.0 & toolchains -> 7.1.0
    All modules that depend on gcc/5.4.0 are now available under gcc/7.1.0. The only exception is cushaw3/3.0.3. Please contact us if you need to use it.
    pgi/19.7 & toolchains -> 19.10
    All modules that depend on pgi/19.7 are now available under pgi/19.10.
    anaconda/5.2.0-py2.7 -> 2019.10-py2.7
    All modules that depend on anaconda/5.2.0-py2.7 are now available under anaconda/2019.

  • Debuggers and Profilers

    Debuggers
    To use a debugger, it is necessary to rebuild your code with the -g flag added. All object files must be removed anytime compiler flags change; if you have a Makefile, run make clean if that target is available. The program must then be run under the control of the debugger. For example, if you are using gdb, you run
    gdb ./myexec
    Adding debugging flags generally disables any optimization flags you may have added, and can slow down the code. Please remember to recompile with -g removed once you have found your bugs.
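    A minimal sketch of the debug-then-release cycle described above (compiler and file names are illustrative):
    # debug build: include symbols, disable optimization
    gcc -g -O0 -o myexec mycode.c
    gdb ./myexec
    # release build: drop -g and restore optimization once the bug is fixed
    gcc -O2 -o myexec mycode.c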
    Gnu Debugger (gdb) and Profiler (gprof)
    The Gnu Compiler Collection compilers are free, open-source tools.

  • Custom Jupyter Kernels

    You can create custom kernels from an Anaconda environment or an Apptainer container.
    In both cases you’ll need to install the ipykernel package.
    Jupyter kernel based on Anaconda environment
    To create a custom kernel of the Anaconda environment myenv that uses Python 3.7:
    module load anaconda
    conda create -n myenv python=3.7 ipykernel <other_packages>
    source activate myenv
    python -m ipykernel install --user --name myenv --display-name "My Env"
    Note:
    You can customize the display name for your kernel. It is shown when you hover over a tile in JupyterLab. If you do not specify a display name, the default Python [conda env:<ENV_NAME>] will be shown.

  • Rivanna Maintenance: March 11, 2020

    Rivanna will be down for maintenance on Wednesday, March 11, beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.
    Rivanna is expected to return to service later in the day.
    The following software modules will be removed from Rivanna during the maintenance period (please use the suggested newer versions):
    gcc/5.4.0 & toolchains -> 7.1.0
    All modules that depend on gcc/5.4.0 will be available under gcc/7.1.0. The only exception is cushaw3/3.

  • PyTorch and UVA HPC

    Description
    PyTorch is a deep learning framework that puts Python first. It provides Tensors and Dynamic neural networks in Python with strong GPU acceleration.
    Software Category: data
    For detailed information, visit the PyTorch website.
    Available Versions
    The current installation of PyTorch incorporates the most popular packages. To find the available versions and learn how to load them, run:
    module spider pytorch
    The output of the command shows the available PyTorch module versions.
    For detailed information about a particular PyTorch module, including how to load the module, run the module spider command with the module’s full version label. For example:
    module spider pytorch/1.

  • A Short MPI Tutorial

    Tutorials and books on MPI
    A helpful online tutorial is available from the Lawrence Livermore National Laboratory. The following books can be found in UVA libraries:
    Parallel Programming with MPI by Peter Pacheco.
    Using MPI: Portable Parallel Programming With the Message-Passing Interface by William Gropp, Ewing Lusk, and Anthony Skjellum.
    Using MPI-2: Advanced Features of the Message-Passing Interface by William Gropp, Ewing Lusk, and Rajeev Thakur.
    MPI: The Complete Reference: The MPI Core by Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra.
    MPI: The Complete Reference: The MPI-2 Extensions by William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, and Marc Snir.

  • Building and Running MPI Code

    Building an MPI Code All implementations provide wrappers around the underlying compilers that simplify compilation. As it is very important to use the headers that correspond to a given library, users are urged to make use of the wrappers whenever possible. For OpenMPI and MVAPICH2 these are:
    mpicc (C), mpicxx (C++), mpif90 (Fortran, free or fixed format)
    For Intel MPI these wrappers invoke gcc/g++/gfortran by default, which is generally not recommended; to use the Intel compilers the corresponding wrappers are:
    mpiicc (C), mpiicpc (C++), mpiifort (Fortran)
    Note: At this time, we recommend MPI users build with Intel 18.0 and IntelMPI 18.
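    As a minimal sketch (module versions and file names are illustrative), compiling and test-running a small MPI program with the wrappers described above might look like:
    # load a compiler and a matching MPI implementation (versions omitted for illustration)
    module load gcc openmpi
    # the wrapper supplies the MPI headers and libraries automatically
    mpicc -o hello_mpi hello_mpi.c
    # launch a quick 4-process test from inside a Slurm job or interactive allocation
    srun -n 4 ./hello_mpi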

  • Docker Images on the HPC System

    Docker requires sudo privilege and therefore it is not supported on the HPC system. To use a Docker image you will need to convert it into Apptainer.
    Convert a Docker image There are several ways to convert a Docker image:
    Download a remote image from Docker Hub Build from a local image cached in Docker daemon Build from a definition file (advanced) Instructions are provided in each of the following sections.
    Docker Hub Docker images hosted on Docker Hub can be downloaded and converted in one step via the apptainer pull command:
    module load apptainer apptainer pull docker://account/image Use the exact same command as you would for docker pull.
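    For example (the image below is just a public Docker Hub image used for illustration), pulling and then running a command inside the converted image might look like:
    module load apptainer
    # download and convert a public image; the resulting file name follows the image:tag pattern and may vary
    apptainer pull docker://python:3.11-slim
    # run a command inside the converted image
    apptainer exec python_3.11-slim.sif python --version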

  • How To

    General: General tips and tricks for computational research. General HowTos ›
    HPC: Rivanna and Afton High Performance Computing platforms. HPC HowTos ›
    Ivy: Ivy Secure Data Computing Platform. Ivy HowTos ›
    Storage: Research Data Storage & Transfer. Storage HowTos ›

  • Rivanna and Afton FAQs

    Topics: General Usage, Allocations, Research Software, Job Management, Storage Management, Data Transfer, Downloading Files, Other Questions.
    General Usage
    How do I gain access to Rivanna/Afton? A faculty member must first request an allocation on the HPC system. Full details can be found here.
    How do I log on to Rivanna/Afton? Use an SSH client from a campus-connected machine and connect to login.hpc.virginia.edu. Instructions for using ssh and other login tools, as well as recommended clients for different operating systems, are here. You can also access the HPC system through our Web-based interface Open OnDemand or FastX.
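    For example, from a terminal on a campus-connected machine (replace mst3k with your own computing ID):
    ssh mst3k@login.hpc.virginia.edu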
    Off Campus?

  • Research Data Storage

    There are a variety of options for storing research data at UVA. Public and internal use data storage systems can be accessed from the Rivanna and Afton high performance computing systems.

  • Graphical SFTP/SCP Transfer Tools

    Several options are available to transfer data files between a local computer and the HPC system through user-friendly, graphical methods.
    Off Campus? Connecting to Rivanna and Afton HPC systems from off Grounds via Secure Shell Access (SSH) or FastX requires a VPN connection. We recommend using the UVA More Secure Network if available. The UVA Anywhere VPN can be used if the UVA More Secure Network is not available. Only Windows and Mac OSX operating systems are supported by the Cisco client provided by ITS. Linux users should refer to these unsupported instructions to install and configure a VPN. The More Secure Network requires authentication through Duo; users should follow the instructions on the dialog box to enter "

  • Rivanna Maintenance: December 18, 2020

    Rivanna will be taken down for routine maintenance on Wednesday, December 18, beginning at 6 a.m.
    You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.
    Rivanna is expected to return to service by 6 a.m. on Thursday, December 19.

  • JIRA Downtime: December 13, 2020

    The JIRA ticketing system will be taken offline on Friday, December 13 from 6 p.m. to 9 p.m. while our system engineers continue the process of migrating the ticketing system from a local environment to the cloud. Please avoid submitting requests during this period if possible. Although moving to a cloud-based ticketing system will improve the speed and efficiency of our customer service in the long run, in the short-term it may cause disruptions for some users.
    If you are unable to log in to JIRA after the migration is completed, you will need to change your password using your UVA e-mail address.

  • Data Transfer

    Public & Moderately Sensitive Data Transfer
    Secure Copy (scp) scp uses secure shell (SSH) protocol to transfer files between your local machine and a remote host. scp can be used with the following syntax:
    scp [source] [destination]
    scp SourceFile mst3k@login.hpc.virginia.edu:/scratch/mst3k
    scp SourceFile mst3k@login.hpc.virginia.edu:/project/Grouper_group_name
    Detailed instructions and examples for using scp are listed here.
    Secure File Transfer Protocol (sftp) sftp is a network protocol for secure file management. Instructions and examples for using sftp are located here.
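    As a brief illustration (mst3k and the file names are placeholders), an interactive sftp session might look like:
    sftp mst3k@login.hpc.virginia.edu
    # at the sftp> prompt, upload or download files, then exit
    put local_data.csv /scratch/mst3k/
    get /scratch/mst3k/results.tar.gz
    quit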
    Graphical File-Transfer Applications FileZilla, Cyberduck, and MobaXterm are examples of open-source SFTP client software for file management through an interactive graphical user interface.

  • Research Project Storage

    Overview The Research Project Storage file system provides users with a collaborative space for data storage and sharing. Public, internal use, and sensitive research data can be stored in Research Project storage, and UVA Information Security provides details about data sensitivity classifications. Members in the same group have access to a shared directory created by the team lead or PI. Group membership can be defined and managed through Grouper (requires VPN connection). /project storage is mounted on the HPC cluster and runs on a new scale-out NAS file system.
    If you are not a researcher, UVA ITS offers Value storage for long-term storage of large scale data.

  • Important Notes from the 17 September Rivanna Maintenance

    Learn about recent changes implemented during the Sept. 17, 2019 maintenance.
  • FastX Web Portal

    Overview FastX is a commercial solution that enables users to start an X11 desktop environment on a remote system. It is available on the UVA HPC frontends. Using it is equivalent to logging in at the console of the frontend.
    Using FastX for the Web We recommend that most users access FastX through its Web interface. To connect, point a browser to:
    https://fastx.hpc.virginia.edu
    Off Campus? Connecting to Rivanna and Afton HPC systems from off Grounds via Secure Shell Access (SSH) or FastX requires a VPN connection. We recommend using the UVA More Secure Network if available. The UVA Anywhere VPN can be used if the UVA More Secure Network is not available.

  • Open OnDemand

    Overview Open OnDemand is a graphical user interface that allows access to UVA HPC via a web browser. Within the Open OnDemand environment users have access to a file explorer; interactive applications like JupyterLab, RStudio Server & FastX Web; a command line interface; and a job composer and job monitor.
    Logging in to UVA HPC The HPC system is accessible through the Open OnDemand web client at https://ood.hpc.virginia.edu. Your login is your UVA computing ID and your password is your Netbadge password. Some services, such as FastX Web, require the Eservices password. If you do not know your Eservices password you must change it through ITS by changing your Netbadge password (see instructions).

  • Open OnDemand: File Explorer

    Open OnDemand provides an integrated file explorer to browse and manage small files. Rivanna and Afton have multiple locations to store your files with different limits and policies. Specifically, each user has a relatively small amount of permanent storage in his/her home directory and a large amount of temporary storage (/scratch) where large data sets can be staged for job processing. Researchers can also lease storage that is accessible on Rivanna. Contact Research Computing or visit the storage website for more information.
    The file explorer provides these basic functions: renaming files, viewing text and small image files, editing text files, and downloading & uploading small files.
    To see the storage locations that you have access to from within Open OnDemand, click on the Files menu.

  • Open OnDemand: Job Composer

    Open OnDemand allows you to submit Slurm jobs to the cluster without using shell commands.
    The job composer simplifies the process of creating a script, submitting a job, and downloading results.
    Submitting Jobs
    We will describe creating a job from a template provided by the system.
    Open the Job Composer tab from the Open OnDemand Dashboard.
    Go to the New Job tab and from the dropdown, select From Template. You can choose the default template or you can select from the list.
    Click on Create New Job. You will need to edit the file that pops up, so click the light blue Open Editor button at the bottom.

  • Pulse Laser Irradiation and Surface Morphology

    Dr. Zhigilei and his team are using Rivanna to perform large-scale atomistic simulations aimed at revealing fundamental processes responsible for the modification of surface morphology and microstructure of metal targets treated by short pulse laser irradiation. The simulations are performed with a highly-optimized parallel computer code capable of reproducing collective dynamics in systems consisting of up to billions of atoms. As a result, the simulations naturally account for the complexity of the material response to the rapid laser energy deposition and provide clear visual representations, or “atomic movies,” of laser-induced dynamic processes. The mechanistic insights revealed in the simulations have an immediate impact on the development of the theoretical understanding of laser-induced processes and assist in optimization of laser processing parameters in current applications based on laser surface modification and nanoparticle generation in laser ablation.
  • Fluid Dynamics and Reef Health

    Professor Reidenbach and his team are using Rivanna to run computational fluid dynamics simulations of wave and tide driven flows over coral reefs in order to determine how storms, nutrient inputs, and sediments impact reef health. This is an image of dye fluxing from the surface of the Hawaiian coral Porites compressa utilizing a technique known as planar laser induced fluorescence (PLIF). Reefs such as this one have been severely impacted by human alteration, both locally through additional inputs of sediments and nutrients, and globally through increased sea surface temperatures caused by climate change. Reidenbach is hopeful that his computational models will allow scientists to better predict the future health of reefs based on human activity and improve global reef restoration efforts.
  • Economic Market Behavior

    While conducting research for a highly-technical study of market behavior, Dr. Ciliberto realized that he needed to parallelize an integration over a sample distribution. RC staff member Ed Hall successfully parallelized Ciliberto’s Matlab code and taught him how to do production runs on the University’s high-performance clusters. “The second stage estimator was computationally intensive,” Ciliberto recalls. “We needed to compute the distribution of the residuals and unobservables for multiple parameter values and at many different points of the distribution, which requires parallelizing the computation. Ed Hall’s expertise in this area was crucial. In fact, without Ed’s contribution, this project could not have been completed.”
  • Tracking Bug Movements

    Ed Hall worked with the Brodie Lab in the Biology department to set up a workflow to analyze videos of bug tracking experiments on the Rivanna Linux cluster. They wanted to use the community Matlab software (idTracker) for beetle movement tracking. Their two goals were to shorten the software runtime and to automate the process. There was a large backlog of videos to go through. Ed installed the idTracker software on Rivanna and modified the code to parallelize the bug tracking process. He wrote and documented shell scripts to automate their workflow on the cluster.
    PI: Edmund Brodie, PhD (Department of Biology)

  • Logging in to the UVA HPC systems

    The UVA HPC systems (Rivanna and Afton) are accessible through a web portal, secure shell terminals, or a remote desktop environment. For all of these access points, your login is your UVA computing ID and your password is your Eservices password. If you do not know your Eservices password you must change it through ITS.
    Off Campus? Connecting to Rivanna and Afton HPC systems from off Grounds via Secure Shell Access (SSH) or FastX requires a VPN connection. We recommend using the UVA More Secure Network if available. The UVA Anywhere VPN can be used if the UVA More Secure Network is not available.

  • MobaXterm

    MobaXterm is the recommended login tool for Windows users. It bundles a tabbed ssh client, a graphical drag-and-drop sftp client, and an X11 window server for Windows, all in one easy-to-use package. Some other tools included are a simple text editor with syntax coloring and several useful Unix utilities such as cd, ls, grep, and others, so that you can run a lightweight Linux environment on your local machine as well as use it to log in to a remote system.
    Download To download MobaXterm, click the link below. Select the “Home” version, “Installer” edition.
    Download MobaXterm
    Run the installer as directed.

  • Slurm Job Manager

    Overview UVA HPC is a multi-user, managed environment. It is divided into login nodes (also called frontends), which are directly accessible by users, and compute nodes, which must be accessed through the resource manager.
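    As a minimal sketch of how work reaches the compute nodes (the partition, resources, module, and script names below are illustrative, not a prescribed configuration), a batch script might look like:
    #!/bin/bash
    #SBATCH --job-name=example        # illustrative job name
    #SBATCH --partition=standard      # partition name is an example; use one available to your allocation
    #SBATCH --ntasks=1
    #SBATCH --time=00:10:00
    #SBATCH --mem=4G

    module load anaconda              # load whatever software the job needs
    python my_script.py               # placeholder command
    Such a script would be submitted with sbatch and monitored with squeue -u $USER.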

  • Quantifying Cerebral Cortex Regions

    A powerful new technique for quantifying regions of the cerebral cortex was developed by Nick Tustison and James Stone at the University of Virginia along with collaborators from the University of Pennsylvania. It was evaluated using large data sets comprised of magnetic resonance imaging (MRI) of the human brain processed on a high-performance computing cluster at the University of Virginia. By making this technique available as open-source software, other neuroscientists are now able to investigate various hypotheses concerning the relationship between brain structure and development. Tustison’s and Stone’s software has been widely disseminated and is being actively incorporated into a variety of clinical research studies, including a collaborative effort between the Department of Defense and Department of Veterans Affairs, exploring the long term effects of traumatic brain injury (TBI) among military service members.
  • Message Passing Interface (MPI) and UVA HPC

    Overview MPI stands for Message Passing Interface. The MPI standard is defined by the Message Passing Interface Forum. The standard defines the interface for a set of functions that can be used to pass messages between processes on the same computer or on different computers. MPI can be used to program shared memory or distributed memory computers. There is a large number of implementations of MPI from various computer vendors and academic groups. MPI is supported on the HPC clusters.
    MPI On the HPC System MPI is a standard that describes the behavior of a library. It is intended to be used with compiled languages (C/C++/Fortran).

  • NVHPC

    Compiling for a GPU Using a GPU can accelerate a code, but requires special programming and compiling. Several options are available for GPU-enabled programs.
    OpenACC OpenACC is a standard
    Available NVIDIA CUDA Compilers
    Module   Version    Module Load Command
    cuda     10.2.89    module load cuda/10.2.89
    cuda     11.4.2     module load cuda/11.4.2
    cuda     11.8.0     module load cuda/11.8.0
    cuda     12.2.2     module load cuda/12.2.2
    cuda     12.4.1     module load cuda/12.4.1
    nvhpc    24.1       module load nvhpc/24.1
    nvhpc    24.5       module load nvhpc/24.5
    GPU architecture According to the CUDA documentation, “in the CUDA naming scheme, GPUs are named sm_xy, where x denotes the GPU generation number, and y the version in that generation.
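    For instance (the architecture flag and file names are illustrative), compiling a CUDA source file for a specific GPU generation might look like:
    module load cuda/12.4.1
    # sm_80 targets an Ampere-generation GPU; pick the value matching your target hardware
    nvcc -arch=sm_80 -o saxpy saxpy.cu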

  • Software Modules

    The lmod modules system on the HPC system enables users to easily set their environments for selected software and to choose versions if appropriate.
    The lmod system is hierarchical; not every module is available in every environment. We provide a core environment which contains most of the software installed by Research Computing staff, but software that requires a compiler or MPI is not in that environment and a compiler must first be loaded.
    View All Modules
    Basic Commands
    List all available software:
    module avail
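    A short illustration of the hierarchy described above (module names are examples): software that needs a compiler or MPI only becomes visible after those modules are loaded.
    module spider openmpi          # shows which compiler modules provide an openmpi module
    module load gcc                # load a compiler first to expose its dependent modules
    module load openmpi            # the MPI module is now visible and loadable
    module list                    # confirm what is currently loaded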

  • UVA HPC Software List

    Module   Category   Description
    R        lang       R is a free software environment for statistical computing and graphics.
    abinit   chem       ABINIT is a package whose main program allows one to find the total energy, charge density and electronic structure of systems made of electrons and nuclei (molecules and periodic solids) within Density Functional Theory (DFT), using pseudopotentials and a planewave or wavelet basis.
  • Tools for Research

    Tools and software projects that UVA Research Computing has collaborated on:
    LOLAweb LOLAweb is a web server and interactive results viewer for enrichment of overlap between a user-provided query region set (a bed file) and a database of region sets. It provides an interactive result explorer to visualize the highest ranked enrichments from the database. LOLAweb is a web interface to the LOLA R package. Launch LOLAweb BARTweb There are a number of commercially licensed tools available to UVa researchers for free. These products, including UVa Box, Dropbox (Health System) and CrashPlan, are most suitable for small-scale storage needs.

  • What is Research Computing?

    UVA Research Computing (RC) is a new program that aims to support computational biomedical research by providing advanced cyberinfrastructure and expertise in data analysis at scale. Our mission is to foster a culture of computational thinking and promote interdisciplinary collaboration in various data-driven research domains. We offer services related to high performance computing, cloud architecture, scientific programming and big data solutions. We also aim to promote computationally intensive research at UVA through collaborative efforts such as UVA’s own CADRE (Computation And Data Resource Exchange) and XSEDE (Extreme Science and Engineering Discovery Environment).
    One of our driving philosophies is that researchers already have medical and scientific expertise, and should not have to become computing experts on top of that.

  • Computing Environments at UVA

    Research Computing (UVA-RC) serves as the principal center for computational resources and associated expertise at the University of Virginia (UVA). Each year UVA-RC provides services to over 433 active PIs that sponsor more than 2463 unique users from 14 different schools/organizations at the University, maintaining a breadth of systems to support the computational and data intensive research of UVA’s researchers.
    High Performance Computing  UVA-RC’s High Performance Computing (HPC) systems are designed with high-speed networks, high performance storage, GPUs, and large amounts of memory in order to support modern compute and memory intensive programs. UVA-RC operates two HPC systems, Rivanna and Afton.