
  • HPC Maintenance Schedule for 2025

    Rivanna/Afton will be taken down for maintenance in 2025 on the following days:
    Winter: Tuesday, January 7
    Spring: TBD
    Summer: TBD
    Fall: TBD
    Please plan accordingly. Questions about the 2025 maintenance schedule should be directed to our user services team.

  • HPC Maintenance: Jan 7, 2025

    The HPC cluster will be down for maintenance on Tuesday, Jan 7, 2025 beginning at 6 am. All systems are expected to return to service by Wednesday, Jan 8 at 6 am.
    IMPORTANT MAINTENANCE NOTES
    How should I prepare, and what should I expect on Jan 7, 2025? You may continue submitting jobs to the HPC system until the maintenance period begins. However, if the system determines that your job will not finish before the maintenance window starts, it will not start until the system is back online. The maintenance will involve upgrading the storage client, requiring all compute and login nodes, including the Open OnDemand and FastX portals, to be taken offline.

  • Access to HPC Resources

    Compute time on Rivanna/Afton is available through two service models.

    Service Unit (SU) Allocations: One SU corresponds to one core-hour. Multiple SUs make up what is called an SU allocation (e.g., a new allocation = 1M SUs).
    Dedicated Computing: This model allows researchers to lease hardware managed by Research Computing (RC) as an alternative to purchasing their own equipment. It provides dedicated access to HPC resources with no wait times.
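    Since one SU equals one core-hour, a job's SU cost can be estimated by multiplying the cores requested by the wall-clock hours used. A minimal sketch (the job size below is illustrative, not a default):

    ```shell
    # 1 SU = 1 core-hour, so SU cost = cores x wall-clock hours.
    # Hypothetical job: 16 cores running for 24 hours.
    cores=16
    hours=24
    echo "$(( cores * hours )) SUs"   # prints "384 SUs"
    ```

    The same arithmetic works in reverse: a 1M-SU allocation would cover, for example, roughly 2,600 such 16-core, 24-hour jobs.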

  • Afton Cluster Dedicated to Prof. John Hawley

    On September 16, 2024, RC dedicated the new Afton computing cluster to the memory of John F. Hawley (1958-2021), late Professor of Astronomy and a leading researcher in computational astrophysics. He also served in the Office of the Dean of the College and Graduate School of Arts and Sciences for nine years, first as Associate Dean for the Sciences and later as Senior Associate Dean for Academic Affairs. The ceremony featured remarks by Josh Baller, Associate Vice President for Research Computing, and Scott Ruffner, Director of Infrastructure for Research Computing, along with a recorded message from Provost Ian Baucom.

  • High-Security Standard Storage Maintenance: Oct 15, 2024

    The Ivy Virtual Machines (VMs) and high-security zone HPC system will be down for storage maintenance on Tuesday, Oct 15, 2024, beginning at 6 a.m. The system is expected to return to full service by 6 a.m. on Wednesday, Oct 16.
    IMPORTANT MAINTENANCE NOTES
    During the maintenance, all VMs will be down, as well as the UVA Ivy Data Transfer Node (DTN) and Globus services. The High-Security HPC cluster will also be unavailable for all job scheduling and viewing.
    If you have any questions about the upcoming Ivy system maintenance, you may contact our user services team.
    Ivy Central Storage transition to HSZ Research Standard
    To transition from old storage hardware, we have retired the Ivy Central Storage and replaced it with the new High Security Zone Research Standard storage.

  • HPC Maintenance: Oct 15, 2024

    The HPC cluster will be down for maintenance on Tuesday, Oct 15, 2024 beginning at 6 am. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until the cluster is returned to service.
    All systems are expected to return to service by Wednesday, Oct 16 at 6 am.
    IMPORTANT MAINTENANCE NOTES
    Expansion of /home
    To transition away from the Qumulo filesystem, we will migrate all /home directories to the GPFS filesystem and automatically increase each user’s /home directory limit to 200GB.
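    To see how your current usage compares with the new 200GB limit, you can total your /home directory with standard tools. A minimal sketch (your site may also provide a dedicated quota command; `du` is shown here only as a generic check):

    ```shell
    # Summarize total /home usage to compare against the 200GB limit.
    # -s reports one total; -h prints a human-readable size (e.g. "12G").
    du -sh "$HOME"
    ```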

  • Research Computing Open House 2024

    UPDATE: The Research Computing Open House was held on a blustery, rainy day, but the spirits of the staff and attendees were not dampened. Turnout was above expectations despite the wet weather. Attendees enjoyed the buffet and their interactions with RC staff.
    The winners of the random-drawing prizes were:
    Maria Luana Morais, SOM
    Matt Panzer, SEAS
    Artun Duransoy, SEAS
    Please join us at the Research Computing Open House on Tuesday, September 17, 2024, from 2-5 p.m. in the Commonwealth Room at Newcomb Hall. We are excited to host the UVA community to share updates on a new supercomputer and services that we are offering.

  • HPC Maintenance: Aug 13, 2024

    The HPC cluster will be partially down for maintenance on Tuesday, Aug 13, 2024 beginning at 6 a.m. The following nodes will be unavailable during this period:
    all of parallel
    Afton nodes in standard and interactive
    A40 GPU nodes in gpu
    The nodes are expected to return to service by Wednesday, Aug 14 at 6 a.m.
    There is no impact on other nodes or services. Jobs on other nodes will continue to run.

  • Reinstatement of file purging of personal /scratch files on Afton and Rivanna

    On Sep 1, 2024 RC system engineers will reinstate a file purging policy for personal /scratch folders on the Afton and Rivanna high-performance computing (HPC) systems. From Sep 1 forward, scratch files that have not been accessed for over 90 days will be permanently deleted on a daily rolling basis. This is not a new policy; it is a reactivation of an established policy that follows general HPC best practices.
    The /scratch filesystem is intended as a temporary work directory. It is not backed up and old files must be removed periodically to maintain a stable HPC environment.
    Key Points: Purging of personal scratch files will start on Sep 1, 2024.
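    To preview which of your scratch files would fall under the purge, you can search for files whose last access time is more than 90 days old. A sketch assuming a personal scratch area at `/scratch/$USER` (the exact path and the policy's use of access time are assumptions based on the description above):

    ```shell
    # List files under your personal scratch area not accessed in 90+ days;
    # -atime +90 matches files whose last access was more than 90 days ago.
    # These would be candidates for the daily rolling purge.
    find /scratch/$USER -type f -atime +90 -print
    ```

    Files you still need should be moved to backed-up storage such as /home or project storage before the purge date, since purged files are permanently deleted.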

  • Production Release of the Afton HPC System: July 2, 2024

    Our new supercomputer, “Afton,” is now available for general use. This is the first major expansion of RC’s computing resources since Rivanna’s last hardware refresh in 2019, and it substantially increases the High-Performance Computing (HPC) capabilities available at UVA, more than doubling the available compute capacity. Each of the 300 compute nodes in the new system has 96 compute cores, up from a maximum of 48 cores per node in Rivanna. The increase in core count is augmented by a significant increase in memory per node: each Afton node has a minimum of 750GB of memory, with some supporting up to 1.

  • Rivanna Maintenance Schedule for 2024

    Rivanna/Afton will be taken down for maintenance in 2024 on the following days:
    Tuesday, February 6
    Tuesday & Wednesday, May 28 & 29
    Tuesday, July 2
    Tuesday, October 15
    Thursday, December 12 -> Postponed to Tuesday, January 7, 2025
    Please plan accordingly. Questions about the 2024 maintenance schedule should be directed to our user services team.