The HPC cluster will be down for maintenance on Tuesday, Oct 15, 2024, beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until the cluster is returned to service.
All systems are expected to return to service by Wednesday, Oct 16 at 6 a.m.
IMPORTANT MAINTENANCE NOTES
Expansion of /home: To transition away from the Qumulo filesystem, we will migrate all /home directories to the GPFS filesystem and automatically increase each user’s /home directory limit to 200GB.
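As a quick way to see how your current usage compares with the new 200GB limit, here is a minimal sketch using standard Linux tools (any site-specific quota command remains the authoritative source):

```bash
# Report the total size of your /home directory; compare against the new 200GB limit.
du -sh "$HOME"
```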
Read more →
Due to the new [licensing restrictions by Anaconda](https://legal.anaconda.com/policies/en?name=terms-of-service#terms-of-service) on research usage, the licensed Anaconda distribution will be removed from the system on October 15, 2024. The current anaconda/2023.07-py3.11 module will redirect to the miniforge/24.3.0-py3.11 module, switching to conda-forge as the default package installation channel with fewer preinstalled packages. Existing environments will not be affected. However, using Anaconda default channels for research without a personal license will violate the Anaconda license. For instructional use, package installation from licensed channels is still allowed.
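For illustration, here is a minimal sketch of working with the replacement module after the redirect; the environment and package names below are arbitrary examples, not part of the announcement:

```bash
# Load the miniforge module that anaconda/2023.07-py3.11 now redirects to
module load miniforge/24.3.0-py3.11

# Confirm that conda-forge is now the default channel in your configuration
conda config --show channels

# Create a new environment explicitly from conda-forge (names are examples)
conda create -n myenv -c conda-forge python=3.11 numpy pandas
```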
Read more →
When: Sep 03, 2023, 01:00 PM EST (US and Canada).
What: Join us for a first-of-its-kind Women in High Performance Computing (WHPC) event, co-hosted by the Northeast, Purdue, and Virginia WHPC chapters to engage in discussions about current challenges and opportunities for fostering a more diverse and inclusive WHPC community. Our panelists will provide an overview of their chapters’ community activities and offer insights into navigating hurdles and fostering professional growth within HPC.
The format for this event will be a brief presentation by the hosts followed by theme-focused breakout rooms where participants will be invited to share their experience and personal perspective.
Read more →
Please join us at the Research Computing Open House on Tuesday, September 17, 2024, from 2-5 p.m. in the Commonwealth Room at Newcomb Hall. We are excited to host the UVA community to share updates on a new supercomputer and services that we are offering.
Why Attend?
- Talk with research computing experts and staff. Have your questions answered.
- Receive the latest information on research computing at UVA, including:
  - Afton, UVA’s new supercomputer, and other high-performance computing resources
  - Secure compute & storage solutions
  - Research collaboration and grant support services
  - the Data Analytics Center: dataset management and analytics including AI
  - the Digital Technology Core: use of wearables, smartwatches, smartphones or IoT devices in your research
  - Upcoming RC workshops
- Learn about our student workers program
- Enjoy light refreshments

How to Attend
You are welcome to drop in anytime during the event and stay as long as you would like.
Read more →
The HPC cluster will be partially down for maintenance on Tuesday, Aug 13, 2024 beginning at 6 a.m. The following nodes will be unavailable during this period:
- all of parallel
- afton nodes in standard and interactive
- A40 GPU nodes in gpu

The nodes are expected to return to service by Wednesday, Aug 14 at 6 a.m.
There is no impact on other nodes or services. Jobs on other nodes will continue to run.
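If you want to check the state of the affected partitions yourself while the work is underway, standard Slurm queries along these lines will do it (a sketch; the output format string is just an example):

```bash
# Show node states in the partitions listed above
sinfo -p parallel,standard,interactive,gpu

# See whether your own jobs are running or pending on affected nodes
squeue -u "$USER" -o "%.10i %.12P %.10T %R"
```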
Read more →
All positions are currently filled.
Read more →
On Sep 1, 2024 RC system engineers will reinstate a file purging policy for personal /scratch folders on the Afton and Rivanna high-performance computing (HPC) systems. From Sep 1 forward, scratch files that have not been accessed for over 90 days will be permanently deleted on a daily rolling basis. This is not a new policy; it is a reactivation of an established policy that follows general HPC best practices.
The /scratch filesystem is intended as a temporary work directory. It is not backed up and old files must be removed periodically to maintain a stable HPC environment.
Key Points: Purging of personal scratch files will start on Sep 1, 2024.
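To estimate which of your files would be affected, here is a minimal sketch using standard GNU find (the purge process itself may apply additional criteria):

```bash
# List personal /scratch files not accessed for more than 90 days
find /scratch/$USER -type f -atime +90 -printf '%AY-%Am-%Ad  %p\n' | sort
```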
Read more →
Our new supercomputer, “Afton,” is now available for general use. This represents the first major expansion of RC’s computing resources since Rivanna's last hardware refresh in 2019. Afton represents a substantial increase in the High-Performance Computing (HPC) capabilities available at UVA, more than doubling the available compute capacity. Each of the 300 compute nodes in the new system has 96 compute cores, an increase from a maximum of 48 cores per node in Rivanna. The increase in core count is augmented by a significant increase in memory per node. Each Afton node boasts a minimum of 750GB of memory, with some supporting up to 1.
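As a rough illustration of what a single Afton node can accommodate, here is a sketch of a Slurm job requesting a full 96-core node; the partition name, module, and executable are placeholders, not prescribed settings:

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=96   # one task per core on a 96-core Afton node
#SBATCH --mem=700G             # within the 750GB per-node minimum
#SBATCH --time=04:00:00
#SBATCH --partition=standard   # placeholder; use the partition appropriate for your work

module load gcc                # placeholder for whatever your application needs
srun ./my_application
```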
Read more →
Rivanna will be down for maintenance on Tuesday, May 28, 2024 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service. While drive mapping and project storage will be unavailable, other storage will remain accessible through Globus.
All systems are expected to return to service by Thursday, May 30 at 6 a.m.
IMPORTANT MAINTENANCE NOTES
Hardware and partition changes
afton: We are pleased to announce the addition of 300 nodes, 96 cores each, based on the AMD EPYC 9454 architecture.
Read more →
Rivanna will be taken down for maintenance in 2024 on the following days:
- Tuesday, February 6
- Tuesday & Wednesday, May 28 & 29
- Tuesday, July 2
- Tuesday, October 15
- Thursday, December 12

Please plan accordingly. Questions about the 2024 maintenance schedule should be directed to our user services team.
Read more →
About This Event: The Research Computing Exhibition will be held on Tuesday, April 23, 2024 in the Newcomb Hall Ballroom. The event will include:
- A panel discussion made up of academic scientific computing experts and research computing faculty and staff
- Judged poster session with prizes:
  - First Place: $3,000 travel voucher
  - Second Place: $2,000 travel voucher
  - Third Place: $1,000 travel voucher
- Light refreshments
If you would like to participate in the poster session, please fill out the Intent to Participate Form by Friday, March 15. While all are welcome to present a poster during the exhibition, only UVA-affiliated non-faculty submissions will be eligible for the award prizes.
Read more →
Rivanna, Research Project storage, and Research Standard storage will be down for maintenance on Tuesday, February 6 beginning at 6 a.m. You may continue to submit jobs to Rivanna until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service. All systems are expected to return to service by 6 a.m. on Wednesday, February 7.
UVA’s Facilities Management (FM) group will be updating the data center power grid during the maintenance period. This work is expected to be completed by 6 a.m. on 2/7.
Read more →
The Data Analytics Center is UVA’s new hub for the management and analysis of your large research data. Need help with your computational research? DAC staff specialize in key domain areas such as image processing, text analysis, bioinformatics, computational chemistry and physics, neural networks, and more. And because the DAC team is located within Research Computing, they can assist in getting your workflows running on the University’s high-performance cluster or secure data system. They can answer your basic computational questions or, through funded engagements, be embedded in your projects.
Big data doesn’t have to be a big deal. Learn how DAC can assist with your computational research – schedule an initial consultation with one of their data analysts by submitting a consultation request.
Read more →
Rivanna will be down for maintenance on Monday, December 18, 2023 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.
All systems are expected to return to service by 6 a.m. on Tuesday, December 19.
IMPORTANT MAINTENANCE NOTES
The operating system will be upgraded to Rocky 8.7 with system glibc 2.28 and GCC 8.5.0. Due to fundamental changes in system libraries, the entire software stack will be rebuilt. Users should rebuild all self-compiled codes and R packages.
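Here is a minimal sketch of what that rebuild looks like in practice, assuming a self-compiled code kept under ~/src and a personal R library pointed to by R_LIBS_USER (module names and paths are illustrative):

```bash
# Rebuild a self-compiled code against the new toolchain (path is illustrative)
module load gcc
cd ~/src/mycode && make clean && make

# Reinstall every package in your personal R library so each is rebuilt
# against the new system libraries
module load R
R --vanilla --quiet <<'EOF'
lib  <- Sys.getenv("R_LIBS_USER")
pkgs <- rownames(installed.packages(lib.loc = lib))
install.packages(pkgs, lib = lib, repos = "https://cloud.r-project.org")
EOF
```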
Read more →
Rivanna will be down for maintenance on Tuesday, October 3, 2023 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service. All systems are expected to return to service by 6 a.m. on Wednesday, October 4.
IMPORTANT MAINTENANCE NOTES
New largemem nodes: RC engineers will be adding 36 nodes, each with 40 cores and 750 GB total memory, to the largemem partition on Rivanna. Jobs that need more than 9 GB of memory per core should be submitted to the largemem partition rather than the standard partition.
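For example, a job that needs about 20 GB per core belongs in largemem; a minimal Slurm header sketch (task count and time limit are arbitrary examples):

```bash
#!/bin/bash
#SBATCH --partition=largemem
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=20G   # more than 9 GB per core, so largemem rather than standard
#SBATCH --time=12:00:00

srun ./my_memory_intensive_application
```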
Read more →
VA-WHPC September Event - Leadership Journeys
Time: Sep 19, 2023, 01:00 PM EST (US and Canada).
Join us for our next community event featuring Dr. Neena Imam as she shares her personal view of challenges and successes experienced throughout her inspiring leadership journey in research, HPC and AI computing. Come learn about career strategies, ask questions, and contribute to our discussion of how the playing field may be leveled to offer equitable IT & HPC leadership opportunities for women and minorities.
Dr. Imam earned a PhD in Electrical Engineering and has been engaged in research and computing in a variety of roles.
Read more →
Rivanna will be taken down for maintenance in 2023 on the following days:
Tuesday, March 7 Tuesday, May 30 Tuesday, October 3 Monday, December 18 Please plan accordingly. Questions about the 2023 maintenance schedule should be directed to our user services team.
Read more →
During the July 18th maintenance, RC engineers installed a new /scratch file storage system on Rivanna. We have created sample scripts and instructions to help you transfer your files from the previous file system to the new one. (Expand the link below for details.) The previous scratch filesystem, now called /oldscratch, will be permanently retired on October 17, 2023, and all the data it contains will be deleted.
Users should clean up their /oldscratch directory in preparation, to minimize the load. A sample script is posted below.
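The actual sample script is posted in the full announcement; purely as an illustration, a transfer of this kind typically reduces to an rsync from the old filesystem to the new one, along these lines (the per-user layout of /oldscratch is an assumption):

```bash
# Illustrative sketch only -- not the official RC migration script.
# Copy files from the retiring filesystem to the new one, preserving
# permissions and timestamps; safe to re-run to pick up anything missed.
rsync -av /oldscratch/$USER/ /scratch/$USER/
```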
Modified queue limits have been implemented to provide maximum read/write performance of the new /scratch filesystem.
Read more →
Rivanna will be down for maintenance on July 18, 2023 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.
All systems are expected to return to service by 6 a.m. on Wednesday, July 19.
IMPORTANT MAINTENANCE NOTES
New scratch: RC engineers will be installing a new /scratch storage filesystem that can be accessed at /scratch/$USER after the end of maintenance.
Modified queue limits will be implemented to provide maximum read/write performance of the new /scratch filesystem.
Read more →
Rivanna will be down for maintenance on May 30, 2023 beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until Rivanna is returned to service.
All systems are expected to return to service by 6 a.m. on Wednesday, May 31.
IMPORTANT MAINTENANCE NOTES
Five RTX3090 nodes (4 GPU devices each) have been added to the gpu partition; use --gres=gpu:rtx3090 in your Slurm script.
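A minimal Slurm header sketch requesting one of the new RTX3090 devices (the other resource values are arbitrary examples):

```bash
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:rtx3090:1   # request one RTX3090 device
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

nvidia-smi   # confirm which GPU was allocated
```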
Modules
The toolchains gompic, gcccuda, and goolfc will be removed from Rivanna during the maintenance period, since we now have CUDA-aware toolchains based on gcc/11.
Read more →