During the July 18th maintenance, RC engineers installed a new /scratch file storage system on Rivanna. We have created sample scripts and instructions to help you transfer your files from the previous file system to the new one. (See the transfer instructions below for details.)
The previous scratch filesystem, now called /oldscratch, will be permanently retired on October 17, 2023, and all the data it contains will be deleted.
Users should clean up their /oldscratch directory in preparation, to minimize the amount of data that needs to be migrated. A sample transfer script is posted below.
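The commands below are one way to see what is taking up space before deciding what to delete; the paths and the 90-day age threshold are only examples, and the find command is shown as a read-only preview.
# Summarize total usage in your old scratch directory
du -sh /oldscratch/$USER
# Show the largest subdirectories, biggest first
du -h --max-depth=1 /oldscratch/$USER | sort -rh | head -20
# Preview files not accessed in the last 90 days (review before deleting anything)
find /oldscratch/$USER -type f -atime +90 -print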
Queue limits have been modified to provide maximum read/write performance on the new /scratch filesystem. Please refer to our updated documentation and adjust your job scripts accordingly.
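If you would like to inspect the current limits yourself before adjusting your scripts, standard Slurm commands such as the following can be run from a login shell; the partition name is only an example.
# Show the configured limits for the standard partition
scontrol show partition standard
# List partitions with their maximum walltime, node count, CPUs, and memory per node
sinfo -o "%P %l %D %c %m"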
Transfer Instructions
Example script to copy files
#!/bin/bash
#SBATCH -A your_allocation # to find your allocation, type "allocations"
#SBATCH -t 12:00:00 # up to 7-00:00:00 (7 days)
#SBATCH -p standard # partition (queue) to run in
rsync -av /oldscratch/$USER/ /scratch/$USER # trailing slash on the source copies its contents, not the directory itself
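Assuming the script above is saved as, for example, copy-scratch.slurm (the filename is arbitrary), it can be submitted and monitored from a login shell:
# Submit the copy job to Slurm
sbatch copy-scratch.slurm
# Check the status of your jobs while the copy runs
squeue -u $USER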
The script is also available through the Open OnDemand Job Composer:
- Go to Open OnDemand Job Composer
- Click: New Job -> From Template
- Select demo-copy-scratch
- In the right panel, click “Create New Job”
- This will take you to the “Jobs” page. In the “Submit Script” panel at the bottom right, click “Open Editor”
- Enter your own allocation. You may edit the script as needed. Click “Save” when done.
- Going back to the “Jobs” page, select demo-copy-scratch and click the green “Submit” button.
As we expect a high volume of data migration, please do not run transfers directly on the login nodes; instead, submit them as jobs via the provided Slurm script, as described above.
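Once the copy job has finished, one optional way to double-check the transfer is a second rsync pass in dry-run mode, which lists anything that would still need to be copied without transferring it:
# Dry run: report remaining differences without copying anything
rsync -avn /oldscratch/$USER/ /scratch/$USER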
The new scratch is subject to the same 10 TB quota and 90-day purge policy. There is no restriction on the number of files. As a friendly reminder, scratch is intended as a temporary work directory, not long-term storage; it is not backed up, and old files need to be purged periodically for system stability. RC offers a number of low-cost storage options to researchers. For more information, visit our storage page.
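To keep an eye on the quota and the purge window, simple commands such as these report current usage and flag older files; this assumes the purge is based on file access time, and the 80-day threshold is only an example.
# Total size of your scratch directory
du -sh /scratch/$USER
# Files not accessed in more than 80 days, i.e. approaching the 90-day purge window
find /scratch/$USER -type f -atime +80 -print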