Web225 stopped responding several minutes ago and has not come back online after multiple reboot attempts. We’re working to restore service and will update this post when we have more information.

2012-12-22 02:33 UTC: The server is still offline and not responding.

2012-12-22 03:51 UTC: We’ve booted the server into a rescue environment and we’re checking the state of the RAID, disks, and filesystems.

2012-12-22 05:01 UTC: We’re currently checking the filesystem for errors by running the fsck command.
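
For those following along, this step amounts to something like the sketch below. It is only an illustration: the device path is an assumption, not web225’s actual volume layout.

```python
import subprocess

# Hypothetical device path; web225's actual volume layout is an assumption here.
DEVICE = "/dev/sda1"

# From the rescue environment, force a full check ("-f") and answer yes
# to every repair prompt ("-y") so the run can proceed unattended.
result = subprocess.run(["fsck", "-f", "-y", DEVICE])

# fsck exit codes: 0 = no errors, 1 = errors were corrected,
# 2 or higher = the problem needs manual attention.
if result.returncode <= 1:
    print(f"{DEVICE} is clean (exit code {result.returncode})")
else:
    print(f"{DEVICE} needs manual intervention (exit code {result.returncode})")
```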

2012-12-22 07:20 UTC: The filesystem checks have finished and the filesystems are now in a clean state. We’re now working to get the RAID controller card replaced.

2012-12-22 10:14 UTC: The server will not boot because of a failed hard drive. We’re replacing the drive that is causing the boot errors and will continue working to bring the server back online as soon as possible.

2012-12-22 11:38 UTC: The hard drive has been replaced and the server is now booting. However, the boot process halts at a kernel panic. We are working to resolve the cause of the panic now.

2012-12-22 12:39 UTC: The server is now booting correctly and the kernel panics have been resolved. We’re now working to restore full access to the machine and verify its functionality.

2012-12-22 14:17 UTC: While verifying the server’s functionality, we found widespread corruption in both the PostgreSQL and MySQL databases. We’ve reinstalled and reinitialized both database servers and are now restoring the databases from backups.
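
Roughly speaking, the restore replays the most recent backups into the freshly initialized servers. A minimal sketch, assuming full logical dumps at hypothetical paths (the real backup locations and credentials will differ):

```python
import subprocess

# Hypothetical backup locations; the real paths and dump names are assumptions.
MYSQL_DUMP = "/backups/mysql/all-databases.sql"
PG_DUMP = "/backups/postgresql/all-databases.sql"

# Replay a full mysqldump into the reinitialized MySQL server.
with open(MYSQL_DUMP) as dump:
    subprocess.run(["mysql", "-u", "root"], stdin=dump, check=True)

# Replay a pg_dumpall backup into the reinitialized PostgreSQL cluster.
with open(PG_DUMP) as dump:
    subprocess.run(["psql", "-U", "postgres"], stdin=dump, check=True)
```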

2012-12-22 15:20 UTC: We are still restoring the databases from backups.

2012-12-22 17:11 UTC: The MySQL databases have been mostly restored and we’re beginning to work on the PostgreSQL databases.

2012-12-22 19:11 UTC: The MySQL and PostgreSQL databases have been restored and the server is back online.

2012-12-23 10:22 UTC: The server’s disks have gone read-only again and we are working on the server now.
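
For the curious, a quick way to spot which filesystems the kernel has flipped to read-only is to scan /proc/mounts; a small sketch:

```python
# Scan /proc/mounts for filesystems the kernel has remounted read-only.
with open("/proc/mounts") as mounts:
    for line in mounts:
        device, mountpoint, fstype, options = line.split()[:4]
        if "ro" in options.split(","):
            print(f"{mountpoint} ({device}, {fstype}) is mounted read-only")
```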

2012-12-23 12:23 UTC: The server is back online as of now, but in case the problem returns we have started a full copy of the data to a spare server.
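
The copy is a straightforward mirror to the standby machine; a sketch of that kind of transfer, with the spare server’s hostname and the data path as assumptions:

```python
import subprocess

# Hypothetical source path and spare-server hostname; both are assumptions.
SOURCE = "/home/"
DEST = "spare-server:/home/"

# "-a" preserves permissions, ownership, and timestamps; "--delete" keeps
# the copy an exact mirror so later incremental runs stay cheap.
subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST], check=True)
```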