
Cluster available again – tape recovery ongoing

Posted on Thursday, May 31, 2018 in Active Cluster Status Notice.

Update, 6/13/2018, 6PM: The cluster is now accessible again. It appears that jobs accessing data on /dors were not impacted, but jobs accessing data on /home, /scratch, or /data were likely lost. Some of these may have been automatically restarted by SLURM.
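
If you are unsure whether one of your jobs was lost or automatically requeued, SLURM's accounting records can tell you. The short Python sketch below is one way to check from a login node; it simply wraps the standard sacct command, and the date range, output fields, and script itself are only illustrative, not an ACCRE-provided tool.

#!/usr/bin/env python3
"""Illustrative sketch (not an ACCRE tool): list the state of your recent
SLURM jobs so you can see which ones failed or were requeued during the
outage window. Assumes sacct is available, as it is on SLURM clusters."""
import getpass
import subprocess

user = getpass.getuser()
cmd = [
    "sacct",
    "-u", user,
    "-S", "2018-06-12",          # start of the window to inspect (example)
    "-E", "2018-06-14",          # end of the window to inspect (example)
    "-n", "-P",                  # no header, pipe-delimited output
    "-o", "JobID,JobName,State,ExitCode",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

for line in out.strip().splitlines():
    jobid, name, state, exitcode = line.split("|")
    # Jobs that died when /home, /scratch, or /data hung typically show
    # FAILED, NODE_FAIL, or CANCELLED; requeued jobs show PENDING or RUNNING.
    print(f"{jobid:<15} {name:<20} {state:<12} {exitcode}")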

We have resumed restoring files impacted by the hardware failure.

Please reach out to us if you have any concerns, questions, or pressing needs.


Update, 6/13/2018: The hardware vendor fixed the faulty firmware on the storage array last night. This morning, however, when we attempted to bring the logical volumes back online in GPFS, the command hung.

This has impacted the ability to log in to the cluster and to run commands accessing home directories. We are working with IBM support to resolve this issue and will update you as we have more information.

Update, 6/12/2018: Tape recovery of /scratch and /data ongoing; some files will be unavailable from 4:30pm this afternoon until 8am tomorrow morning

The hardware vendors were unable to recover the logical volume in their last attempt. We will continue to restore files from our tape library as quickly as we can.

This afternoon, beginning at 4:30PM, we will bring down five additional logical volumes (the same ones as last week) so that engineers from the hardware vendor can correct a failed firmware patch they attempted to apply to one of the controllers in the impacted storage appliance. As a result, you may lose access to some files on /scratch and /data until 8AM tomorrow morning. We apologize for the short notice, but we only recently received a response from the vendor, and this work is urgent because we currently have only a single functioning controller in that storage appliance.


Update, 6/8/2018: Logical volume holding /scratch and /data files could not be restored; tape recovery ongoing

Unfortunately, the hardware vendor was again unable to restore the logical volume. They may try one last time next week, but at this point we are operating under the assumption that all impacted data will need to be restored from our tape library.

We have already restored a large number of files and will continue to do so over the weekend. We will send an update early next week. In the meantime, please let us know if you need access to any critical files that were impacted so we can prioritize that restore (to the best of our ability).

Next week we also plan to work with the vendor to address the faulty hardware/firmware to ensure this problem does not occur again.

Please reach out if you have any questions or concerns.

Update, 6/7/2018: 5% of files on /scratch and /data still unavailable; more files will be offline overnight starting at 4:30pm

The hardware vendor was again unsuccessful last night in recovering the logical volume. They are going to try one last time tonight, which means the other five logical volumes will again be going offline around 4:30PM today.

While we are hopeful the engineers will be able to resolve the problem tonight, we are prepared to be told that the logical disk is unrecoverable. Fortunately, all data are backed up to tape, and we have been restoring impacted files since early this week.

The total amount of impacted data is on the order of 65TB (/scratch and /data hold close to 1PB in total); however, recovering these data will require reading 216TB of tape. We expect this process to take 1-2 weeks. If you have data you need urgently, please let us know and we will do our best to prioritize those recoveries, although we are at times limited by exactly how and where the data are stored in our tape library.
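
As a rough illustration of why the estimate is measured in weeks rather than days, the back-of-the-envelope arithmetic below (an approximation only, not a measured figure) shows the sustained read rate that 216TB over 7-14 days implies.

# Back-of-the-envelope only: what sustained tape read rate does
# "216TB in 1-2 weeks" imply? (Rough arithmetic, not an official figure.)
TAPE_TO_READ_TB = 216
for days in (7, 14):
    rate_mb_per_s = TAPE_TO_READ_TB * 1e6 / (days * 86400)   # TB -> MB, days -> seconds
    print(f"{days} days -> ~{rate_mb_per_s:.0f} MB/s sustained")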


Update, 6/6/2018: 5% of files on /scratch and /data still unavailable; more files will be offline overnight starting at 4:30pm

Engineers from the hardware vendor worked overnight on this issue and have identified a bug in the storage array’s controller that they believe to be the root cause of the issues that began last week. They will be working again tonight in an attempt to correct this bug.

The five logical volumes within this storage array that were not impacted by this bug are back online today for read access only (any new files or modifications to existing files will automatically be written to volumes on other storage arrays).

During tonight's maintenance these volumes will again be taken offline, so files that already existed on them will again be inaccessible.

We will be taking these five volumes offline this afternoon beginning at 4:30PM, and plan to bring them back online tomorrow morning around 8AM.

We again apologize for this interruption to your research. We understand this is frustrating and can assure you we are doing all we can. Please reach out if you need special assistance of any kind.


Update, 6/5/2018: Maintenance on /scratch and /data starting at 8pm tonight to attempt a fix for the disk issue; open a ticket if you need access to an affected file

Unfortunately, we have been unable to restore the logical disk used by /scratch and /data on the ACCRE cluster. Under guidance from the hardware vendor, we will be performing maintenance tonight in an attempt to restore the disk. This will require us to temporarily take additional logical disks offline, so more files on /scratch and /data may be unavailable between 8PM tonight and 8AM tomorrow morning. We apologize if you are impacted by this work.


Update, 6/3/2018: 5% of files on /scratch and /data are inaccessible due to disk failure; open a ticket if you need access to an affected file

We have continued working on this hardware problem throughout the weekend. Specifically, we have made several attempts (with guidance from the hardware vendor) to make the logical disk visible to one or both controllers within the storage appliance, but to no avail. At the direction of the vendor, we have updated the firmware on the impacted device in order to produce additional logging information, which, we hope, will lead to a diagnosis of the problem.

We have also determined that roughly 5% of all files on /scratch and /data are impacted. Please reach out to us via our Helpdesk if you have critical files impacted by this hardware problem.

Based on interactions with the vendor, it appears unlikely this issue will be fixed today or even within the next few days, so please reach out to us if you have been impacted and need assistance of any kind.

We are very sorry for the inconvenience and the impact on your research that this may be causing.


Update, 6/1/2018: Parts of /scratch and /data may be inaccessible due to disk failure

We have been working with the vendor in an attempt to resolve the hardware problem of an unrecognized logical disk as quickly as possible. Unfortunately, parts of /scratch and /data are still inaccessible. In most cases the symptom of this problem will be errors like “input/output error” when attempting to read data from /scratch or /data. We will keep everyone updated as we continue working to diagnose and resolve this problem.
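
If you would like to check whether particular files of yours are affected before opening a ticket, one option is to attempt a small read of each file and note which ones fail. The Python sketch below does exactly that; the directory it scans is up to you, and the script is offered only as an illustration, not as an official ACCRE tool.

#!/usr/bin/env python3
"""Illustrative sketch (not an ACCRE tool): walk a directory under /scratch
or /data and list files that raise an I/O error when read, i.e. the symptom
described above. Pass the directory to scan as the first argument."""
import os
import sys

root = sys.argv[1] if len(sys.argv) > 1 else "."
unreadable = []

for dirpath, _dirnames, filenames in os.walk(root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            with open(path, "rb") as fh:
                fh.read(4096)       # a small read is enough to hit the error
        except OSError as exc:      # "input/output error" surfaces as errno EIO
            unreadable.append((path, exc))

for path, exc in unreadable:
    print(f"UNREADABLE: {path} ({exc})")
print(f"{len(unreadable)} unreadable file(s) under {root}")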

If you are working under a tight deadline and the inaccessibility of a critical file is preventing you from moving forward, please open a Helpdesk ticket with us. In some cases we may be able to help.


Original post, 5/29/2018:

At approximately 8:30pm tonight, a disk controller in one of our storage arrays failed. The array automatically failed over to the redundant controller, but one of the six logical disks was not recognized by the redundant controller.

The logical disk that is not currently accessible is part of the /scratch and /data filesystem. Therefore, while the cluster is still up, some files stored on /scratch and /data may be inaccessible until further notice. We are actively working with the hardware vendor to diagnose the problem and replace any faulty hardware.

ACCRE Staff
